00:00:00.001 Started by upstream project "autotest-nightly" build number 3339 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 2733 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.089 The recommended git tool is: git 00:00:00.089 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.121 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.146 Using shallow fetch with depth 1 00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.146 > git --version # timeout=10 00:00:00.160 > git --version # 'git version 2.39.2' 00:00:00.160 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.166 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.166 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.125 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.135 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.146 Checking out Revision 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 (FETCH_HEAD) 00:00:05.146 > git config core.sparsecheckout # timeout=10 00:00:05.155 > git read-tree -mu HEAD # timeout=10 00:00:05.169 > git checkout -f 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 # timeout=5 00:00:05.185 Commit message: "jenkins/check-jenkins-labels: add ExtraStorage label" 00:00:05.185 > git rev-list --no-walk 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 # timeout=10 00:00:05.261 [Pipeline] Start of Pipeline 00:00:05.271 [Pipeline] library 00:00:05.272 Loading library shm_lib@master 00:00:05.273 Library shm_lib@master is cached. Copying from home. 00:00:05.284 [Pipeline] node 00:00:05.291 Running on WFP3 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.292 [Pipeline] { 00:00:05.300 [Pipeline] catchError 00:00:05.301 [Pipeline] { 00:00:05.309 [Pipeline] wrap 00:00:05.316 [Pipeline] { 00:00:05.321 [Pipeline] stage 00:00:05.322 [Pipeline] { (Prologue) 00:00:05.495 [Pipeline] sh 00:00:05.806 + logger -p user.info -t JENKINS-CI 00:00:05.822 [Pipeline] echo 00:00:05.824 Node: WFP3 00:00:05.830 [Pipeline] sh 00:00:06.128 [Pipeline] setCustomBuildProperty 00:00:06.134 [Pipeline] echo 00:00:06.135 Cleanup processes 00:00:06.138 [Pipeline] sh 00:00:06.420 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.420 1495097 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.431 [Pipeline] sh 00:00:06.715 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.715 ++ grep -v 'sudo pgrep' 00:00:06.715 ++ awk '{print $1}' 00:00:06.715 + sudo kill -9 00:00:06.715 + true 00:00:06.728 [Pipeline] cleanWs 00:00:06.737 [WS-CLEANUP] Deleting project workspace... 00:00:06.737 [WS-CLEANUP] Deferred wipeout is used... 
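The process-cleanup step traced above is worth unpacking: pgrep -af prints "PID command-line" for every process whose command line mentions the previous job's SPDK checkout, grep -v drops the pgrep invocation itself, and awk keeps only the PIDs. In this run the list came back empty, so kill -9 ran with no arguments, failed with a usage error, and the bare "+ true" swallowed the failure. A minimal sketch of that idiom, reconstructed from the xtrace output rather than copied from the pipeline script (the ws and pids variable names are illustrative):

# Kill anything still running out of the previous job's SPDK checkout.
ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# pgrep -af prints "PID full-command"; drop the pgrep line, keep the PIDs.
pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
# $pids is deliberately unquoted so multiple PIDs word-split into separate
# arguments; when it is empty, kill exits non-zero and "|| true" keeps the
# step green, the same role the bare "true" plays in the trace above.
sudo kill -9 $pids || true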
00:00:06.743 [WS-CLEANUP] done 00:00:06.746 [Pipeline] setCustomBuildProperty 00:00:06.755 [Pipeline] sh 00:00:07.034 + sudo git config --global --replace-all safe.directory '*' 00:00:07.104 [Pipeline] nodesByLabel 00:00:07.105 Found a total of 2 nodes with the 'sorcerer' label 00:00:07.114 [Pipeline] httpRequest 00:00:07.118 HttpMethod: GET 00:00:07.118 URL: http://10.211.11.40/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:07.122 Sending request to url: http://10.211.11.40/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:07.149 Response Code: HTTP/1.1 200 OK 00:00:07.149 Success: Status code 200 is in the accepted range: 200,404 00:00:07.150 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:31.797 [Pipeline] sh 00:00:32.080 + tar --no-same-owner -xf jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:32.098 [Pipeline] httpRequest 00:00:32.102 HttpMethod: GET 00:00:32.102 URL: http://10.211.11.40/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:32.103 Sending request to url: http://10.211.11.40/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:32.115 Response Code: HTTP/1.1 200 OK 00:00:32.116 Success: Status code 200 is in the accepted range: 200,404 00:00:32.116 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:01:01.009 [Pipeline] sh 00:01:01.287 + tar --no-same-owner -xf spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:01:03.831 [Pipeline] sh 00:01:04.138 + git -C spdk log --oneline -n5 00:01:04.138 aa824ae66 bdevperf: remove max io size limit for verify 00:01:04.138 161ef3f54 scripts/perf: Rename vhost_*master_core to vhost_*main_core 00:01:04.138 8bba6ed63 fuzz/llvm_vfio_fuzz: Adjust array index to avoid overflow 00:01:04.138 387dbedc4 env_dpdk: fix build with OpenSSL < 3.0.0 00:01:04.138 2b5de63c1 include: ensure ENOKEY is defined on FreeBSD 00:01:04.149 [Pipeline] } 00:01:04.165 [Pipeline] // stage 00:01:04.172 [Pipeline] stage 00:01:04.174 [Pipeline] { (Prepare) 00:01:04.188 [Pipeline] writeFile 00:01:04.199 [Pipeline] sh 00:01:04.477 + logger -p user.info -t JENKINS-CI 00:01:04.488 [Pipeline] sh 00:01:04.767 + logger -p user.info -t JENKINS-CI 00:01:04.777 [Pipeline] sh 00:01:05.056 + cat autorun-spdk.conf 00:01:05.056 RUN_NIGHTLY=1 00:01:05.056 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.056 SPDK_TEST_NVMF=1 00:01:05.056 SPDK_TEST_NVME_CLI=1 00:01:05.056 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.056 SPDK_TEST_NVMF_NICS=e810 00:01:05.056 SPDK_RUN_UBSAN=1 00:01:05.063 NET_TYPE=phy 00:01:05.067 [Pipeline] readFile 00:01:05.086 [Pipeline] withEnv 00:01:05.088 [Pipeline] { 00:01:05.101 [Pipeline] sh 00:01:05.382 + set -ex 00:01:05.382 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:05.382 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:05.382 ++ RUN_NIGHTLY=1 00:01:05.382 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.382 ++ SPDK_TEST_NVMF=1 00:01:05.382 ++ SPDK_TEST_NVME_CLI=1 00:01:05.382 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.382 ++ SPDK_TEST_NVMF_NICS=e810 00:01:05.382 ++ SPDK_RUN_UBSAN=1 00:01:05.382 ++ NET_TYPE=phy 00:01:05.382 + case $SPDK_TEST_NVMF_NICS in 00:01:05.382 + DRIVERS=ice 00:01:05.382 + [[ tcp == \r\d\m\a ]] 00:01:05.382 + [[ -n ice ]] 00:01:05.382 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:05.382 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:05.382 rmmod: ERROR: Module mlx5_ib is not currently 
loaded 00:01:05.382 rmmod: ERROR: Module irdma is not currently loaded 00:01:05.382 rmmod: ERROR: Module i40iw is not currently loaded 00:01:05.382 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:05.382 + true 00:01:05.382 + for D in $DRIVERS 00:01:05.382 + sudo modprobe ice 00:01:05.382 + exit 0 00:01:05.391 [Pipeline] } 00:01:05.407 [Pipeline] // withEnv 00:01:05.412 [Pipeline] } 00:01:05.427 [Pipeline] // stage 00:01:05.436 [Pipeline] catchError 00:01:05.437 [Pipeline] { 00:01:05.451 [Pipeline] timeout 00:01:05.451 Timeout set to expire in 40 min 00:01:05.453 [Pipeline] { 00:01:05.467 [Pipeline] stage 00:01:05.469 [Pipeline] { (Tests) 00:01:05.483 [Pipeline] sh 00:01:05.766 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.766 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.766 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.766 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:05.766 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.766 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.766 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:05.766 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.766 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.766 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.766 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.766 + source /etc/os-release 00:01:05.766 ++ NAME='Fedora Linux' 00:01:05.766 ++ VERSION='38 (Cloud Edition)' 00:01:05.766 ++ ID=fedora 00:01:05.766 ++ VERSION_ID=38 00:01:05.766 ++ VERSION_CODENAME= 00:01:05.766 ++ PLATFORM_ID=platform:f38 00:01:05.766 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:05.766 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:05.766 ++ LOGO=fedora-logo-icon 00:01:05.766 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:05.766 ++ HOME_URL=https://fedoraproject.org/ 00:01:05.766 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:05.766 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:05.766 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:05.766 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:05.766 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:05.766 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:05.766 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:05.766 ++ SUPPORT_END=2024-05-14 00:01:05.766 ++ VARIANT='Cloud Edition' 00:01:05.766 ++ VARIANT_ID=cloud 00:01:05.766 + uname -a 00:01:05.766 Linux spdk-wfp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:05.766 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:08.307 Hugepages 00:01:08.307 node hugesize free / total 00:01:08.307 node0 1048576kB 0 / 0 00:01:08.307 node0 2048kB 0 / 0 00:01:08.307 node1 1048576kB 0 / 0 00:01:08.307 node1 2048kB 0 / 0 00:01:08.307 00:01:08.307 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:08.307 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:08.307 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:08.307 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:08.307 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:08.307 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:08.307 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:08.307 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:08.307 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:08.308 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:08.308 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:01:08.308 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:08.308 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:08.308 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:08.308 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:08.308 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:08.308 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:08.308 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:08.308 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:08.308 + rm -f /tmp/spdk-ld-path 00:01:08.308 + source autorun-spdk.conf 00:01:08.308 ++ RUN_NIGHTLY=1 00:01:08.308 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.308 ++ SPDK_TEST_NVMF=1 00:01:08.308 ++ SPDK_TEST_NVME_CLI=1 00:01:08.308 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.308 ++ SPDK_TEST_NVMF_NICS=e810 00:01:08.308 ++ SPDK_RUN_UBSAN=1 00:01:08.308 ++ NET_TYPE=phy 00:01:08.308 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:08.308 + [[ -n '' ]] 00:01:08.308 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:08.308 + for M in /var/spdk/build-*-manifest.txt 00:01:08.308 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:08.308 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:08.308 + for M in /var/spdk/build-*-manifest.txt 00:01:08.308 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:08.308 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:08.308 ++ uname 00:01:08.308 + [[ Linux == \L\i\n\u\x ]] 00:01:08.308 + sudo dmesg -T 00:01:08.569 + sudo dmesg --clear 00:01:08.569 + dmesg_pid=1496114 00:01:08.569 + [[ Fedora Linux == FreeBSD ]] 00:01:08.569 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:08.569 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:08.569 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:08.569 + [[ -x /usr/src/fio-static/fio ]] 00:01:08.569 + export FIO_BIN=/usr/src/fio-static/fio 00:01:08.569 + FIO_BIN=/usr/src/fio-static/fio 00:01:08.569 + sudo dmesg -Tw 00:01:08.569 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:08.569 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:08.569 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:08.569 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:08.569 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:08.569 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:08.569 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:08.569 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:08.569 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:08.569 Test configuration: 00:01:08.569 RUN_NIGHTLY=1 00:01:08.569 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.569 SPDK_TEST_NVMF=1 00:01:08.569 SPDK_TEST_NVME_CLI=1 00:01:08.569 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.569 SPDK_TEST_NVMF_NICS=e810 00:01:08.569 SPDK_RUN_UBSAN=1 00:01:08.569 NET_TYPE=phy 20:01:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:08.569 20:01:45 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:08.569 20:01:45 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:08.569 20:01:45 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:08.569 20:01:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.569 20:01:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.569 20:01:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.569 20:01:45 -- paths/export.sh@5 -- $ export PATH 00:01:08.569 20:01:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.569 20:01:45 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:08.569 20:01:45 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:08.569 20:01:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707937305.XXXXXX 00:01:08.569 20:01:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707937305.xKm0er 00:01:08.569 20:01:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:08.569 20:01:45 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
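Two mechanisms traced above deserve a note. First, autorun-spdk.conf is plain shell: the job writes KEY=value lines, then both the pipeline (set -ex; source .../autorun-spdk.conf) and spdk/autorun.sh source it, so flags like SPDK_TEST_NVMF=1 become environment variables for every later step. Second, SPDK_TEST_NVMF_NICS drives kernel driver loading: for e810 the case statement picks DRIVERS=ice, the RDMA modules are unloaded first (rmmod failures for modules that are not loaded are tolerated), and each selected driver is modprobe'd. A condensed sketch reconstructed from the traces, not copied from the real scripts; only the e810 arm of the case statement is visible in this log:

# The conf file is plain shell, so sourcing it is the whole "parser".
conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
[[ -f $conf ]] && source "$conf"

# NIC-to-driver mapping as observed in the trace (e810 -> ice).
case $SPDK_TEST_NVMF_NICS in
  e810) DRIVERS=ice ;;
esac

# Unload RDMA modules left over from other jobs; rmmod complains when a
# module is not loaded, so the failure is ignored.
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
for D in $DRIVERS; do
  sudo modprobe "$D"
done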
00:01:08.569 20:01:45 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:08.569 20:01:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:08.569 20:01:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:08.569 20:01:45 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:08.569 20:01:45 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:08.569 20:01:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.569 20:01:45 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:08.569 20:01:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:08.569 20:01:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:08.569 20:01:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:08.569 20:01:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:08.569 Wed Feb 14 07:01:45 PM UTC 2024 00:01:08.569 20:01:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:08.569 v24.05-pre-81-gaa824ae66 00:01:08.569 20:01:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:08.569 20:01:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:08.569 20:01:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:08.569 20:01:45 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']' 00:01:08.570 20:01:45 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:01:08.570 20:01:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.570 ************************************ 00:01:08.570 START TEST ubsan 00:01:08.570 ************************************ 00:01:08.570 20:01:45 -- common/autotest_common.sh@1102 -- $ echo 'using ubsan' 00:01:08.570 using ubsan 00:01:08.570 00:01:08.570 real 0m0.000s 00:01:08.570 user 0m0.000s 00:01:08.570 sys 0m0.000s 00:01:08.570 20:01:45 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:08.570 20:01:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.570 ************************************ 00:01:08.570 END TEST ubsan 00:01:08.570 ************************************ 00:01:08.570 20:01:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:08.570 20:01:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:08.570 20:01:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:08.570 20:01:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:08.570 20:01:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:08.570 20:01:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:08.570 20:01:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:08.570 20:01:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:08.570 20:01:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:08.830 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:08.830 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:09.089 Using 'verbs' RDMA provider 00:01:21.879 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:31.927 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:31.927 Creating mk/config.mk...done. 00:01:31.927 Creating mk/cc.flags.mk...done. 00:01:31.927 Type 'make' to build. 00:01:31.927 20:02:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:31.927 20:02:08 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']' 00:01:31.927 20:02:08 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:01:31.927 20:02:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.927 ************************************ 00:01:31.927 START TEST make 00:01:31.927 ************************************ 00:01:31.927 20:02:08 -- common/autotest_common.sh@1102 -- $ make -j96 00:01:31.927 make[1]: Nothing to be done for 'all'. 00:01:40.076 The Meson build system 00:01:40.076 Version: 1.3.1 00:01:40.076 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:40.076 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:40.076 Build type: native build 00:01:40.077 Program cat found: YES (/usr/bin/cat) 00:01:40.077 Project name: DPDK 00:01:40.077 Project version: 23.11.0 00:01:40.077 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:40.077 C linker for the host machine: cc ld.bfd 2.39-16 00:01:40.077 Host machine cpu family: x86_64 00:01:40.077 Host machine cpu: x86_64 00:01:40.077 Message: ## Building in Developer Mode ## 00:01:40.077 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:40.077 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:40.077 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:40.077 Program python3 found: YES (/usr/bin/python3) 00:01:40.077 Program cat found: YES (/usr/bin/cat) 00:01:40.077 Compiler for C supports arguments -march=native: YES 00:01:40.077 Checking for size of "void *" : 8 00:01:40.077 Checking for size of "void *" : 8 (cached) 00:01:40.077 Library m found: YES 00:01:40.077 Library numa found: YES 00:01:40.077 Has header "numaif.h" : YES 00:01:40.077 Library fdt found: NO 00:01:40.077 Library execinfo found: NO 00:01:40.077 Has header "execinfo.h" : YES 00:01:40.077 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:40.077 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:40.077 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:40.077 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:40.077 Run-time dependency openssl found: YES 3.0.9 00:01:40.077 Run-time dependency libpcap found: YES 1.10.4 00:01:40.077 Has header "pcap.h" with dependency libpcap: YES 00:01:40.077 Compiler for C supports arguments -Wcast-qual: YES 00:01:40.077 Compiler for C supports arguments -Wdeprecated: YES 00:01:40.077 Compiler for C supports arguments -Wformat: YES 00:01:40.077 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:40.077 Compiler for C supports arguments -Wformat-security: NO 00:01:40.077 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:40.077 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:40.077 Compiler for C 
supports arguments -Wnested-externs: YES 00:01:40.077 Compiler for C supports arguments -Wold-style-definition: YES 00:01:40.077 Compiler for C supports arguments -Wpointer-arith: YES 00:01:40.077 Compiler for C supports arguments -Wsign-compare: YES 00:01:40.077 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:40.077 Compiler for C supports arguments -Wundef: YES 00:01:40.077 Compiler for C supports arguments -Wwrite-strings: YES 00:01:40.077 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:40.077 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:40.077 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:40.077 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:40.077 Program objdump found: YES (/usr/bin/objdump) 00:01:40.077 Compiler for C supports arguments -mavx512f: YES 00:01:40.077 Checking if "AVX512 checking" compiles: YES 00:01:40.077 Fetching value of define "__SSE4_2__" : 1 00:01:40.077 Fetching value of define "__AES__" : 1 00:01:40.077 Fetching value of define "__AVX__" : 1 00:01:40.077 Fetching value of define "__AVX2__" : 1 00:01:40.077 Fetching value of define "__AVX512BW__" : 1 00:01:40.077 Fetching value of define "__AVX512CD__" : 1 00:01:40.077 Fetching value of define "__AVX512DQ__" : 1 00:01:40.077 Fetching value of define "__AVX512F__" : 1 00:01:40.077 Fetching value of define "__AVX512VL__" : 1 00:01:40.077 Fetching value of define "__PCLMUL__" : 1 00:01:40.077 Fetching value of define "__RDRND__" : 1 00:01:40.077 Fetching value of define "__RDSEED__" : 1 00:01:40.077 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:40.077 Fetching value of define "__znver1__" : (undefined) 00:01:40.077 Fetching value of define "__znver2__" : (undefined) 00:01:40.077 Fetching value of define "__znver3__" : (undefined) 00:01:40.077 Fetching value of define "__znver4__" : (undefined) 00:01:40.077 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:40.077 Message: lib/log: Defining dependency "log" 00:01:40.077 Message: lib/kvargs: Defining dependency "kvargs" 00:01:40.077 Message: lib/telemetry: Defining dependency "telemetry" 00:01:40.077 Checking for function "getentropy" : NO 00:01:40.077 Message: lib/eal: Defining dependency "eal" 00:01:40.077 Message: lib/ring: Defining dependency "ring" 00:01:40.077 Message: lib/rcu: Defining dependency "rcu" 00:01:40.077 Message: lib/mempool: Defining dependency "mempool" 00:01:40.077 Message: lib/mbuf: Defining dependency "mbuf" 00:01:40.077 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:40.077 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:40.077 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:40.077 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:40.077 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:40.077 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:40.077 Compiler for C supports arguments -mpclmul: YES 00:01:40.077 Compiler for C supports arguments -maes: YES 00:01:40.077 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:40.077 Compiler for C supports arguments -mavx512bw: YES 00:01:40.077 Compiler for C supports arguments -mavx512dq: YES 00:01:40.077 Compiler for C supports arguments -mavx512vl: YES 00:01:40.077 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:40.077 Compiler for C supports arguments -mavx2: YES 00:01:40.077 Compiler for C supports arguments -mavx: YES 00:01:40.077 Message: lib/net: 
Defining dependency "net" 00:01:40.077 Message: lib/meter: Defining dependency "meter" 00:01:40.077 Message: lib/ethdev: Defining dependency "ethdev" 00:01:40.077 Message: lib/pci: Defining dependency "pci" 00:01:40.077 Message: lib/cmdline: Defining dependency "cmdline" 00:01:40.077 Message: lib/hash: Defining dependency "hash" 00:01:40.077 Message: lib/timer: Defining dependency "timer" 00:01:40.077 Message: lib/compressdev: Defining dependency "compressdev" 00:01:40.077 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:40.077 Message: lib/dmadev: Defining dependency "dmadev" 00:01:40.077 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:40.077 Message: lib/power: Defining dependency "power" 00:01:40.077 Message: lib/reorder: Defining dependency "reorder" 00:01:40.077 Message: lib/security: Defining dependency "security" 00:01:40.077 Has header "linux/userfaultfd.h" : YES 00:01:40.077 Has header "linux/vduse.h" : YES 00:01:40.077 Message: lib/vhost: Defining dependency "vhost" 00:01:40.077 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:40.077 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:40.077 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:40.077 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:40.077 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:40.077 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:40.077 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:40.077 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:40.077 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:40.077 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:40.077 Program doxygen found: YES (/usr/bin/doxygen) 00:01:40.077 Configuring doxy-api-html.conf using configuration 00:01:40.077 Configuring doxy-api-man.conf using configuration 00:01:40.077 Program mandb found: YES (/usr/bin/mandb) 00:01:40.077 Program sphinx-build found: NO 00:01:40.077 Configuring rte_build_config.h using configuration 00:01:40.077 Message: 00:01:40.077 ================= 00:01:40.077 Applications Enabled 00:01:40.077 ================= 00:01:40.077 00:01:40.077 apps: 00:01:40.077 00:01:40.077 00:01:40.077 Message: 00:01:40.077 ================= 00:01:40.077 Libraries Enabled 00:01:40.077 ================= 00:01:40.077 00:01:40.077 libs: 00:01:40.077 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:40.077 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:40.077 cryptodev, dmadev, power, reorder, security, vhost, 00:01:40.077 00:01:40.077 Message: 00:01:40.077 =============== 00:01:40.077 Drivers Enabled 00:01:40.077 =============== 00:01:40.077 00:01:40.077 common: 00:01:40.077 00:01:40.077 bus: 00:01:40.077 pci, vdev, 00:01:40.077 mempool: 00:01:40.077 ring, 00:01:40.077 dma: 00:01:40.077 00:01:40.077 net: 00:01:40.077 00:01:40.077 crypto: 00:01:40.077 00:01:40.077 compress: 00:01:40.077 00:01:40.077 vdpa: 00:01:40.077 00:01:40.077 00:01:40.077 Message: 00:01:40.077 ================= 00:01:40.077 Content Skipped 00:01:40.077 ================= 00:01:40.077 00:01:40.077 apps: 00:01:40.077 dumpcap: explicitly disabled via build config 00:01:40.077 graph: explicitly disabled via build config 00:01:40.077 pdump: explicitly disabled via build config 00:01:40.077 proc-info: explicitly disabled via build config 00:01:40.077 test-acl: explicitly 
disabled via build config 00:01:40.077 test-bbdev: explicitly disabled via build config 00:01:40.077 test-cmdline: explicitly disabled via build config 00:01:40.077 test-compress-perf: explicitly disabled via build config 00:01:40.077 test-crypto-perf: explicitly disabled via build config 00:01:40.077 test-dma-perf: explicitly disabled via build config 00:01:40.077 test-eventdev: explicitly disabled via build config 00:01:40.077 test-fib: explicitly disabled via build config 00:01:40.077 test-flow-perf: explicitly disabled via build config 00:01:40.077 test-gpudev: explicitly disabled via build config 00:01:40.077 test-mldev: explicitly disabled via build config 00:01:40.077 test-pipeline: explicitly disabled via build config 00:01:40.077 test-pmd: explicitly disabled via build config 00:01:40.077 test-regex: explicitly disabled via build config 00:01:40.077 test-sad: explicitly disabled via build config 00:01:40.077 test-security-perf: explicitly disabled via build config 00:01:40.077 00:01:40.077 libs: 00:01:40.077 metrics: explicitly disabled via build config 00:01:40.078 acl: explicitly disabled via build config 00:01:40.078 bbdev: explicitly disabled via build config 00:01:40.078 bitratestats: explicitly disabled via build config 00:01:40.078 bpf: explicitly disabled via build config 00:01:40.078 cfgfile: explicitly disabled via build config 00:01:40.078 distributor: explicitly disabled via build config 00:01:40.078 efd: explicitly disabled via build config 00:01:40.078 eventdev: explicitly disabled via build config 00:01:40.078 dispatcher: explicitly disabled via build config 00:01:40.078 gpudev: explicitly disabled via build config 00:01:40.078 gro: explicitly disabled via build config 00:01:40.078 gso: explicitly disabled via build config 00:01:40.078 ip_frag: explicitly disabled via build config 00:01:40.078 jobstats: explicitly disabled via build config 00:01:40.078 latencystats: explicitly disabled via build config 00:01:40.078 lpm: explicitly disabled via build config 00:01:40.078 member: explicitly disabled via build config 00:01:40.078 pcapng: explicitly disabled via build config 00:01:40.078 rawdev: explicitly disabled via build config 00:01:40.078 regexdev: explicitly disabled via build config 00:01:40.078 mldev: explicitly disabled via build config 00:01:40.078 rib: explicitly disabled via build config 00:01:40.078 sched: explicitly disabled via build config 00:01:40.078 stack: explicitly disabled via build config 00:01:40.078 ipsec: explicitly disabled via build config 00:01:40.078 pdcp: explicitly disabled via build config 00:01:40.078 fib: explicitly disabled via build config 00:01:40.078 port: explicitly disabled via build config 00:01:40.078 pdump: explicitly disabled via build config 00:01:40.078 table: explicitly disabled via build config 00:01:40.078 pipeline: explicitly disabled via build config 00:01:40.078 graph: explicitly disabled via build config 00:01:40.078 node: explicitly disabled via build config 00:01:40.078 00:01:40.078 drivers: 00:01:40.078 common/cpt: not in enabled drivers build config 00:01:40.078 common/dpaax: not in enabled drivers build config 00:01:40.078 common/iavf: not in enabled drivers build config 00:01:40.078 common/idpf: not in enabled drivers build config 00:01:40.078 common/mvep: not in enabled drivers build config 00:01:40.078 common/octeontx: not in enabled drivers build config 00:01:40.078 bus/auxiliary: not in enabled drivers build config 00:01:40.078 bus/cdx: not in enabled drivers build config 00:01:40.078 bus/dpaa: not in 
enabled drivers build config 00:01:40.078 bus/fslmc: not in enabled drivers build config 00:01:40.078 bus/ifpga: not in enabled drivers build config 00:01:40.078 bus/platform: not in enabled drivers build config 00:01:40.078 bus/vmbus: not in enabled drivers build config 00:01:40.078 common/cnxk: not in enabled drivers build config 00:01:40.078 common/mlx5: not in enabled drivers build config 00:01:40.078 common/nfp: not in enabled drivers build config 00:01:40.078 common/qat: not in enabled drivers build config 00:01:40.078 common/sfc_efx: not in enabled drivers build config 00:01:40.078 mempool/bucket: not in enabled drivers build config 00:01:40.078 mempool/cnxk: not in enabled drivers build config 00:01:40.078 mempool/dpaa: not in enabled drivers build config 00:01:40.078 mempool/dpaa2: not in enabled drivers build config 00:01:40.078 mempool/octeontx: not in enabled drivers build config 00:01:40.078 mempool/stack: not in enabled drivers build config 00:01:40.078 dma/cnxk: not in enabled drivers build config 00:01:40.078 dma/dpaa: not in enabled drivers build config 00:01:40.078 dma/dpaa2: not in enabled drivers build config 00:01:40.078 dma/hisilicon: not in enabled drivers build config 00:01:40.078 dma/idxd: not in enabled drivers build config 00:01:40.078 dma/ioat: not in enabled drivers build config 00:01:40.078 dma/skeleton: not in enabled drivers build config 00:01:40.078 net/af_packet: not in enabled drivers build config 00:01:40.078 net/af_xdp: not in enabled drivers build config 00:01:40.078 net/ark: not in enabled drivers build config 00:01:40.078 net/atlantic: not in enabled drivers build config 00:01:40.078 net/avp: not in enabled drivers build config 00:01:40.078 net/axgbe: not in enabled drivers build config 00:01:40.078 net/bnx2x: not in enabled drivers build config 00:01:40.078 net/bnxt: not in enabled drivers build config 00:01:40.078 net/bonding: not in enabled drivers build config 00:01:40.078 net/cnxk: not in enabled drivers build config 00:01:40.078 net/cpfl: not in enabled drivers build config 00:01:40.078 net/cxgbe: not in enabled drivers build config 00:01:40.078 net/dpaa: not in enabled drivers build config 00:01:40.078 net/dpaa2: not in enabled drivers build config 00:01:40.078 net/e1000: not in enabled drivers build config 00:01:40.078 net/ena: not in enabled drivers build config 00:01:40.078 net/enetc: not in enabled drivers build config 00:01:40.078 net/enetfec: not in enabled drivers build config 00:01:40.078 net/enic: not in enabled drivers build config 00:01:40.078 net/failsafe: not in enabled drivers build config 00:01:40.078 net/fm10k: not in enabled drivers build config 00:01:40.078 net/gve: not in enabled drivers build config 00:01:40.078 net/hinic: not in enabled drivers build config 00:01:40.078 net/hns3: not in enabled drivers build config 00:01:40.078 net/i40e: not in enabled drivers build config 00:01:40.078 net/iavf: not in enabled drivers build config 00:01:40.078 net/ice: not in enabled drivers build config 00:01:40.078 net/idpf: not in enabled drivers build config 00:01:40.078 net/igc: not in enabled drivers build config 00:01:40.078 net/ionic: not in enabled drivers build config 00:01:40.078 net/ipn3ke: not in enabled drivers build config 00:01:40.078 net/ixgbe: not in enabled drivers build config 00:01:40.078 net/mana: not in enabled drivers build config 00:01:40.078 net/memif: not in enabled drivers build config 00:01:40.078 net/mlx4: not in enabled drivers build config 00:01:40.078 net/mlx5: not in enabled drivers build config 
00:01:40.078 net/mvneta: not in enabled drivers build config 00:01:40.078 net/mvpp2: not in enabled drivers build config 00:01:40.078 net/netvsc: not in enabled drivers build config 00:01:40.078 net/nfb: not in enabled drivers build config 00:01:40.078 net/nfp: not in enabled drivers build config 00:01:40.078 net/ngbe: not in enabled drivers build config 00:01:40.078 net/null: not in enabled drivers build config 00:01:40.078 net/octeontx: not in enabled drivers build config 00:01:40.078 net/octeon_ep: not in enabled drivers build config 00:01:40.078 net/pcap: not in enabled drivers build config 00:01:40.078 net/pfe: not in enabled drivers build config 00:01:40.078 net/qede: not in enabled drivers build config 00:01:40.078 net/ring: not in enabled drivers build config 00:01:40.078 net/sfc: not in enabled drivers build config 00:01:40.078 net/softnic: not in enabled drivers build config 00:01:40.078 net/tap: not in enabled drivers build config 00:01:40.078 net/thunderx: not in enabled drivers build config 00:01:40.078 net/txgbe: not in enabled drivers build config 00:01:40.078 net/vdev_netvsc: not in enabled drivers build config 00:01:40.078 net/vhost: not in enabled drivers build config 00:01:40.078 net/virtio: not in enabled drivers build config 00:01:40.078 net/vmxnet3: not in enabled drivers build config 00:01:40.078 raw/*: missing internal dependency, "rawdev" 00:01:40.078 crypto/armv8: not in enabled drivers build config 00:01:40.078 crypto/bcmfs: not in enabled drivers build config 00:01:40.078 crypto/caam_jr: not in enabled drivers build config 00:01:40.078 crypto/ccp: not in enabled drivers build config 00:01:40.078 crypto/cnxk: not in enabled drivers build config 00:01:40.078 crypto/dpaa_sec: not in enabled drivers build config 00:01:40.078 crypto/dpaa2_sec: not in enabled drivers build config 00:01:40.078 crypto/ipsec_mb: not in enabled drivers build config 00:01:40.078 crypto/mlx5: not in enabled drivers build config 00:01:40.078 crypto/mvsam: not in enabled drivers build config 00:01:40.078 crypto/nitrox: not in enabled drivers build config 00:01:40.078 crypto/null: not in enabled drivers build config 00:01:40.078 crypto/octeontx: not in enabled drivers build config 00:01:40.078 crypto/openssl: not in enabled drivers build config 00:01:40.078 crypto/scheduler: not in enabled drivers build config 00:01:40.078 crypto/uadk: not in enabled drivers build config 00:01:40.078 crypto/virtio: not in enabled drivers build config 00:01:40.078 compress/isal: not in enabled drivers build config 00:01:40.078 compress/mlx5: not in enabled drivers build config 00:01:40.078 compress/octeontx: not in enabled drivers build config 00:01:40.078 compress/zlib: not in enabled drivers build config 00:01:40.078 regex/*: missing internal dependency, "regexdev" 00:01:40.078 ml/*: missing internal dependency, "mldev" 00:01:40.078 vdpa/ifc: not in enabled drivers build config 00:01:40.078 vdpa/mlx5: not in enabled drivers build config 00:01:40.078 vdpa/nfp: not in enabled drivers build config 00:01:40.078 vdpa/sfc: not in enabled drivers build config 00:01:40.078 event/*: missing internal dependency, "eventdev" 00:01:40.078 baseband/*: missing internal dependency, "bbdev" 00:01:40.078 gpu/*: missing internal dependency, "gpudev" 00:01:40.078 00:01:40.078 00:01:40.078 Build targets in project: 85 00:01:40.078 00:01:40.078 DPDK 23.11.0 00:01:40.078 00:01:40.078 User defined options 00:01:40.078 buildtype : debug 00:01:40.078 default_library : shared 00:01:40.078 libdir : lib 00:01:40.078 prefix : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:40.078 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:40.078 c_link_args : 00:01:40.078 cpu_instruction_set: native 00:01:40.078 disable_apps : test-regex,test-sad,test-gpudev,dumpcap,test-fib,proc-info,graph,test-compress-perf,pdump,test-acl,test-security-perf,test,test-pmd,test-crypto-perf,test-eventdev,test-flow-perf,test-dma-perf,test-mldev,test-pipeline,test-cmdline,test-bbdev 00:01:40.078 disable_libs : pdcp,jobstats,gpudev,cfgfile,distributor,graph,stack,pdump,bbdev,fib,bpf,ipsec,eventdev,node,mldev,metrics,gso,dispatcher,lpm,table,bitratestats,member,port,regexdev,latencystats,rib,pcapng,sched,pipeline,efd,rawdev,acl,ip_frag,gro 00:01:40.078 enable_docs : false 00:01:40.078 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:40.078 enable_kmods : false 00:01:40.078 tests : false 00:01:40.078 00:01:40.078 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:40.078 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:40.078 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:40.078 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:40.079 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:40.079 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:40.079 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:40.079 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:40.079 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:40.079 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:40.079 [9/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:40.079 [10/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:40.079 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:40.345 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:40.345 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:40.345 [14/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:40.345 [15/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:40.345 [16/265] Linking static target lib/librte_kvargs.a 00:01:40.345 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:40.345 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:40.345 [19/265] Linking static target lib/librte_log.a 00:01:40.345 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:40.345 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:40.345 [22/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:40.345 [23/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:40.345 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:40.345 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:40.345 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:40.345 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:40.345 [28/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:40.345 [29/265] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:40.346 [30/265] Linking static target lib/librte_pci.a 00:01:40.346 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:40.346 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:40.346 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:40.346 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:40.346 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:40.346 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:40.346 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:40.610 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:40.610 [39/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.610 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:40.610 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:40.610 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:40.610 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:40.610 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:40.610 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:40.610 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:40.610 [47/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.610 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.610 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.610 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:40.610 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:40.610 [52/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.610 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.610 [54/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.869 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:40.869 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.869 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:40.869 [58/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.869 [59/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.869 [60/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:40.869 [61/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.869 [62/265] Linking static target lib/librte_meter.a 00:01:40.869 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:40.869 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:40.869 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.869 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:40.869 [67/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.869 [68/265] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.869 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:40.869 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:40.869 [71/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.869 [72/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.869 [73/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:40.869 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.869 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.869 [76/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.869 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.869 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.869 [79/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.869 [80/265] Linking static target lib/librte_ring.a 00:01:40.869 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.870 [82/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.870 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.870 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.870 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:40.870 [86/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.870 [87/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.870 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:40.870 [89/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.870 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.870 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.870 [92/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.870 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.870 [94/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.870 [95/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.870 [96/265] Linking static target lib/librte_telemetry.a 00:01:40.870 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.870 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.870 [99/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.870 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.870 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.870 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.870 [103/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.870 [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.870 [105/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.870 [106/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.870 [107/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.870 [108/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:01:40.870 [109/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.870 [110/265] Linking static target lib/librte_cmdline.a 00:01:40.870 [111/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.870 [112/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.870 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.870 [114/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.870 [115/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.870 [116/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.870 [117/265] Linking static target lib/librte_mempool.a 00:01:40.870 [118/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.870 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.870 [120/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.870 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.870 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.870 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.870 [124/265] Linking static target lib/librte_rcu.a 00:01:40.870 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.870 [126/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.870 [127/265] Linking static target lib/librte_net.a 00:01:40.870 [128/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.870 [129/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.870 [130/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.870 [131/265] Linking static target lib/librte_timer.a 00:01:40.870 [132/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.870 [133/265] Linking static target lib/librte_eal.a 00:01:40.870 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:41.128 [135/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.128 [136/265] Linking target lib/librte_log.so.24.0 00:01:41.128 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:41.128 [138/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:41.128 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:41.128 [140/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.128 [141/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:41.128 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:41.128 [143/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:41.128 [144/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:41.128 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:41.128 [146/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.128 [147/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:41.128 [148/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:41.128 [149/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:41.128 [150/265] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.128 [151/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:41.128 [152/265] Linking static target lib/librte_mbuf.a 00:01:41.128 [153/265] Linking static target lib/librte_compressdev.a 00:01:41.128 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:41.128 [155/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:41.128 [156/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:41.128 [157/265] Linking target lib/librte_kvargs.so.24.0 00:01:41.128 [158/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:41.128 [159/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:41.128 [160/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:41.128 [161/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.128 [162/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:41.128 [163/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:41.128 [164/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:41.128 [165/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.128 [166/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.128 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:41.128 [168/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:41.128 [169/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.128 [170/265] Linking static target lib/librte_dmadev.a 00:01:41.386 [171/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.386 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:41.386 [173/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:41.386 [174/265] Linking target lib/librte_telemetry.so.24.0 00:01:41.386 [175/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:41.386 [176/265] Linking static target lib/librte_hash.a 00:01:41.386 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:41.386 [178/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:41.386 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:41.386 [180/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:41.386 [181/265] Linking static target lib/librte_power.a 00:01:41.386 [182/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.386 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:41.386 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:41.386 [185/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:41.386 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:41.386 [187/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:41.386 [188/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:41.386 [189/265] Linking static target lib/librte_security.a 00:01:41.386 [190/265] Generating symbol file 
lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:41.386 [191/265] Linking static target lib/librte_reorder.a 00:01:41.386 [192/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:41.386 [193/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.386 [194/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.386 [195/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:41.386 [196/265] Linking static target drivers/librte_bus_vdev.a 00:01:41.386 [197/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:41.386 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:41.386 [199/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:41.386 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:41.645 [201/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.645 [202/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.645 [203/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.645 [204/265] Linking static target drivers/librte_bus_pci.a 00:01:41.645 [205/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.645 [206/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:41.645 [207/265] Linking static target lib/librte_cryptodev.a 00:01:41.645 [208/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.645 [209/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.645 [210/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.645 [211/265] Linking static target drivers/librte_mempool_ring.a 00:01:41.645 [212/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.645 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.903 [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.903 [215/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.903 [216/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.903 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.903 [218/265] Linking static target lib/librte_ethdev.a 00:01:41.903 [219/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.903 [220/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.903 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.162 [222/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:42.162 [223/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.420 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.355 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:43.355 [226/265] Linking static target lib/librte_vhost.a 00:01:43.355 [227/265] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.731 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.922 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.300 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.300 [231/265] Linking target lib/librte_eal.so.24.0 00:01:50.559 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:50.559 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:50.559 [234/265] Linking target lib/librte_meter.so.24.0 00:01:50.559 [235/265] Linking target lib/librte_ring.so.24.0 00:01:50.559 [236/265] Linking target lib/librte_pci.so.24.0 00:01:50.559 [237/265] Linking target lib/librte_dmadev.so.24.0 00:01:50.559 [238/265] Linking target lib/librte_timer.so.24.0 00:01:50.559 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:50.559 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:50.559 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:50.559 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:50.559 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:50.559 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:50.559 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:50.559 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:50.818 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:50.818 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:50.818 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:50.818 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:51.076 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:51.076 [252/265] Linking target lib/librte_compressdev.so.24.0 00:01:51.076 [253/265] Linking target lib/librte_reorder.so.24.0 00:01:51.076 [254/265] Linking target lib/librte_net.so.24.0 00:01:51.076 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:51.076 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:51.076 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:51.076 [258/265] Linking target lib/librte_hash.so.24.0 00:01:51.076 [259/265] Linking target lib/librte_security.so.24.0 00:01:51.076 [260/265] Linking target lib/librte_ethdev.so.24.0 00:01:51.076 [261/265] Linking target lib/librte_cmdline.so.24.0 00:01:51.449 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:51.449 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:51.449 [264/265] Linking target lib/librte_power.so.24.0 00:01:51.449 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:51.449 INFO: autodetecting backend as ninja 00:01:51.449 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:52.027 CC lib/ut_mock/mock.o 00:01:52.285 CC lib/log/log.o 00:01:52.285 CC lib/log/log_deprecated.o 00:01:52.285 CC lib/log/log_flags.o 00:01:52.285 CC lib/ut/ut.o 00:01:52.285 LIB 
libspdk_ut_mock.a 00:01:52.285 LIB libspdk_log.a 00:01:52.285 SO libspdk_ut_mock.so.6.0 00:01:52.285 LIB libspdk_ut.a 00:01:52.285 SO libspdk_log.so.7.0 00:01:52.285 SYMLINK libspdk_ut_mock.so 00:01:52.285 SO libspdk_ut.so.2.0 00:01:52.285 SYMLINK libspdk_log.so 00:01:52.544 SYMLINK libspdk_ut.so 00:01:52.544 CXX lib/trace_parser/trace.o 00:01:52.544 CC lib/util/base64.o 00:01:52.544 CC lib/util/bit_array.o 00:01:52.544 CC lib/util/cpuset.o 00:01:52.544 CC lib/util/crc16.o 00:01:52.544 CC lib/util/crc32.o 00:01:52.544 CC lib/util/crc32c.o 00:01:52.544 CC lib/util/crc32_ieee.o 00:01:52.544 CC lib/util/dif.o 00:01:52.544 CC lib/util/crc64.o 00:01:52.544 CC lib/util/fd.o 00:01:52.544 CC lib/util/file.o 00:01:52.544 CC lib/util/hexlify.o 00:01:52.544 CC lib/util/iov.o 00:01:52.544 CC lib/util/math.o 00:01:52.544 CC lib/dma/dma.o 00:01:52.544 CC lib/util/pipe.o 00:01:52.544 CC lib/util/strerror_tls.o 00:01:52.544 CC lib/util/string.o 00:01:52.544 CC lib/util/uuid.o 00:01:52.544 CC lib/util/fd_group.o 00:01:52.544 CC lib/util/xor.o 00:01:52.544 CC lib/util/zipf.o 00:01:52.544 CC lib/ioat/ioat.o 00:01:52.803 CC lib/vfio_user/host/vfio_user_pci.o 00:01:52.803 CC lib/vfio_user/host/vfio_user.o 00:01:52.803 LIB libspdk_dma.a 00:01:52.803 SO libspdk_dma.so.4.0 00:01:52.803 LIB libspdk_ioat.a 00:01:52.803 SYMLINK libspdk_dma.so 00:01:52.803 SO libspdk_ioat.so.7.0 00:01:52.803 LIB libspdk_vfio_user.a 00:01:53.061 SO libspdk_vfio_user.so.5.0 00:01:53.061 SYMLINK libspdk_ioat.so 00:01:53.061 SYMLINK libspdk_vfio_user.so 00:01:53.061 LIB libspdk_util.a 00:01:53.061 SO libspdk_util.so.9.0 00:01:53.320 SYMLINK libspdk_util.so 00:01:53.320 LIB libspdk_trace_parser.a 00:01:53.320 SO libspdk_trace_parser.so.5.0 00:01:53.320 SYMLINK libspdk_trace_parser.so 00:01:53.320 CC lib/json/json_util.o 00:01:53.320 CC lib/json/json_write.o 00:01:53.320 CC lib/json/json_parse.o 00:01:53.320 CC lib/idxd/idxd.o 00:01:53.320 CC lib/idxd/idxd_user.o 00:01:53.320 CC lib/rdma/common.o 00:01:53.320 CC lib/rdma/rdma_verbs.o 00:01:53.320 CC lib/vmd/vmd.o 00:01:53.320 CC lib/conf/conf.o 00:01:53.320 CC lib/vmd/led.o 00:01:53.320 CC lib/env_dpdk/pci.o 00:01:53.320 CC lib/env_dpdk/env.o 00:01:53.320 CC lib/env_dpdk/memory.o 00:01:53.320 CC lib/env_dpdk/init.o 00:01:53.320 CC lib/env_dpdk/pci_virtio.o 00:01:53.320 CC lib/env_dpdk/threads.o 00:01:53.320 CC lib/env_dpdk/pci_ioat.o 00:01:53.320 CC lib/env_dpdk/pci_vmd.o 00:01:53.320 CC lib/env_dpdk/pci_idxd.o 00:01:53.320 CC lib/env_dpdk/pci_event.o 00:01:53.320 CC lib/env_dpdk/sigbus_handler.o 00:01:53.320 CC lib/env_dpdk/pci_dpdk.o 00:01:53.320 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:53.320 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:53.579 LIB libspdk_conf.a 00:01:53.579 LIB libspdk_rdma.a 00:01:53.579 SO libspdk_conf.so.6.0 00:01:53.579 LIB libspdk_json.a 00:01:53.579 SO libspdk_rdma.so.6.0 00:01:53.579 SO libspdk_json.so.6.0 00:01:53.579 SYMLINK libspdk_conf.so 00:01:53.837 SYMLINK libspdk_rdma.so 00:01:53.837 SYMLINK libspdk_json.so 00:01:53.837 LIB libspdk_idxd.a 00:01:53.837 SO libspdk_idxd.so.12.0 00:01:53.837 LIB libspdk_vmd.a 00:01:53.837 SYMLINK libspdk_idxd.so 00:01:53.837 SO libspdk_vmd.so.6.0 00:01:53.837 CC lib/jsonrpc/jsonrpc_server.o 00:01:53.837 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:53.837 CC lib/jsonrpc/jsonrpc_client.o 00:01:53.837 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:54.096 SYMLINK libspdk_vmd.so 00:01:54.096 LIB libspdk_jsonrpc.a 00:01:54.096 SO libspdk_jsonrpc.so.6.0 00:01:54.355 SYMLINK libspdk_jsonrpc.so 00:01:54.355 LIB libspdk_env_dpdk.a 00:01:54.355 CC 
lib/rpc/rpc.o 00:01:54.355 SO libspdk_env_dpdk.so.14.0 00:01:54.613 SYMLINK libspdk_env_dpdk.so 00:01:54.613 LIB libspdk_rpc.a 00:01:54.613 SO libspdk_rpc.so.6.0 00:01:54.613 SYMLINK libspdk_rpc.so 00:01:54.872 CC lib/trace/trace.o 00:01:54.872 CC lib/trace/trace_flags.o 00:01:54.872 CC lib/trace/trace_rpc.o 00:01:54.872 CC lib/notify/notify.o 00:01:54.872 CC lib/notify/notify_rpc.o 00:01:54.872 CC lib/sock/sock.o 00:01:54.872 CC lib/sock/sock_rpc.o 00:01:54.872 LIB libspdk_notify.a 00:01:55.131 SO libspdk_notify.so.6.0 00:01:55.131 LIB libspdk_trace.a 00:01:55.131 SO libspdk_trace.so.10.0 00:01:55.131 SYMLINK libspdk_notify.so 00:01:55.131 SYMLINK libspdk_trace.so 00:01:55.131 LIB libspdk_sock.a 00:01:55.131 SO libspdk_sock.so.9.0 00:01:55.389 SYMLINK libspdk_sock.so 00:01:55.389 CC lib/thread/thread.o 00:01:55.390 CC lib/thread/iobuf.o 00:01:55.390 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:55.390 CC lib/nvme/nvme_ctrlr.o 00:01:55.390 CC lib/nvme/nvme_fabric.o 00:01:55.390 CC lib/nvme/nvme_ns_cmd.o 00:01:55.390 CC lib/nvme/nvme_pcie.o 00:01:55.390 CC lib/nvme/nvme_ns.o 00:01:55.390 CC lib/nvme/nvme_pcie_common.o 00:01:55.390 CC lib/nvme/nvme_quirks.o 00:01:55.390 CC lib/nvme/nvme_qpair.o 00:01:55.390 CC lib/nvme/nvme.o 00:01:55.390 CC lib/nvme/nvme_transport.o 00:01:55.390 CC lib/nvme/nvme_discovery.o 00:01:55.390 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:55.390 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:55.390 CC lib/nvme/nvme_tcp.o 00:01:55.390 CC lib/nvme/nvme_opal.o 00:01:55.390 CC lib/nvme/nvme_io_msg.o 00:01:55.390 CC lib/nvme/nvme_poll_group.o 00:01:55.390 CC lib/nvme/nvme_zns.o 00:01:55.390 CC lib/nvme/nvme_cuse.o 00:01:55.390 CC lib/nvme/nvme_vfio_user.o 00:01:55.390 CC lib/nvme/nvme_rdma.o 00:01:56.325 LIB libspdk_thread.a 00:01:56.325 SO libspdk_thread.so.10.0 00:01:56.584 SYMLINK libspdk_thread.so 00:01:56.584 CC lib/accel/accel.o 00:01:56.584 CC lib/accel/accel_rpc.o 00:01:56.584 CC lib/accel/accel_sw.o 00:01:56.584 CC lib/blob/blobstore.o 00:01:56.584 CC lib/init/json_config.o 00:01:56.584 CC lib/init/subsystem.o 00:01:56.584 CC lib/blob/request.o 00:01:56.584 CC lib/virtio/virtio.o 00:01:56.584 CC lib/blob/zeroes.o 00:01:56.584 CC lib/init/subsystem_rpc.o 00:01:56.584 CC lib/blob/blob_bs_dev.o 00:01:56.584 CC lib/init/rpc.o 00:01:56.584 CC lib/virtio/virtio_vhost_user.o 00:01:56.584 CC lib/virtio/virtio_vfio_user.o 00:01:56.584 CC lib/virtio/virtio_pci.o 00:01:56.842 LIB libspdk_init.a 00:01:56.842 SO libspdk_init.so.5.0 00:01:56.842 LIB libspdk_virtio.a 00:01:56.842 LIB libspdk_nvme.a 00:01:57.100 SYMLINK libspdk_init.so 00:01:57.100 SO libspdk_virtio.so.7.0 00:01:57.100 SYMLINK libspdk_virtio.so 00:01:57.100 SO libspdk_nvme.so.13.0 00:01:57.100 CC lib/event/app.o 00:01:57.100 CC lib/event/reactor.o 00:01:57.100 CC lib/event/app_rpc.o 00:01:57.100 CC lib/event/log_rpc.o 00:01:57.100 CC lib/event/scheduler_static.o 00:01:57.359 SYMLINK libspdk_nvme.so 00:01:57.359 LIB libspdk_accel.a 00:01:57.359 SO libspdk_accel.so.15.0 00:01:57.359 LIB libspdk_event.a 00:01:57.618 SYMLINK libspdk_accel.so 00:01:57.618 SO libspdk_event.so.13.0 00:01:57.618 SYMLINK libspdk_event.so 00:01:57.618 CC lib/bdev/bdev.o 00:01:57.618 CC lib/bdev/bdev_rpc.o 00:01:57.618 CC lib/bdev/bdev_zone.o 00:01:57.618 CC lib/bdev/part.o 00:01:57.618 CC lib/bdev/scsi_nvme.o 00:01:58.555 LIB libspdk_blob.a 00:01:58.555 SO libspdk_blob.so.11.0 00:01:58.555 SYMLINK libspdk_blob.so 00:01:58.814 CC lib/lvol/lvol.o 00:01:58.814 CC lib/blobfs/blobfs.o 00:01:58.814 CC lib/blobfs/tree.o 00:01:59.381 LIB libspdk_bdev.a 00:01:59.381 SO 
libspdk_bdev.so.15.0 00:01:59.381 LIB libspdk_blobfs.a 00:01:59.381 SYMLINK libspdk_bdev.so 00:01:59.381 SO libspdk_blobfs.so.10.0 00:01:59.381 LIB libspdk_lvol.a 00:01:59.381 SO libspdk_lvol.so.10.0 00:01:59.381 SYMLINK libspdk_blobfs.so 00:01:59.642 SYMLINK libspdk_lvol.so 00:01:59.642 CC lib/ftl/ftl_core.o 00:01:59.642 CC lib/ftl/ftl_init.o 00:01:59.642 CC lib/ftl/ftl_layout.o 00:01:59.642 CC lib/ftl/ftl_debug.o 00:01:59.642 CC lib/ftl/ftl_io.o 00:01:59.642 CC lib/ftl/ftl_sb.o 00:01:59.642 CC lib/ftl/ftl_l2p.o 00:01:59.642 CC lib/ftl/ftl_l2p_flat.o 00:01:59.642 CC lib/ftl/ftl_nv_cache.o 00:01:59.642 CC lib/ftl/ftl_band.o 00:01:59.642 CC lib/ftl/ftl_writer.o 00:01:59.642 CC lib/ftl/ftl_band_ops.o 00:01:59.642 CC lib/ftl/ftl_reloc.o 00:01:59.642 CC lib/ftl/ftl_rq.o 00:01:59.642 CC lib/ftl/ftl_l2p_cache.o 00:01:59.642 CC lib/ftl/ftl_p2l.o 00:01:59.642 CC lib/ftl/ftl_trace.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:59.642 CC lib/scsi/dev.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:59.642 CC lib/scsi/lun.o 00:01:59.642 CC lib/scsi/port.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:59.642 CC lib/scsi/scsi.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:59.642 CC lib/scsi/scsi_bdev.o 00:01:59.642 CC lib/scsi/scsi_pr.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:59.642 CC lib/scsi/scsi_rpc.o 00:01:59.642 CC lib/scsi/task.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:59.642 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:59.642 CC lib/ftl/utils/ftl_conf.o 00:01:59.642 CC lib/nvmf/ctrlr.o 00:01:59.642 CC lib/nvmf/ctrlr_discovery.o 00:01:59.642 CC lib/ftl/utils/ftl_md.o 00:01:59.642 CC lib/ftl/utils/ftl_mempool.o 00:01:59.642 CC lib/nvmf/ctrlr_bdev.o 00:01:59.642 CC lib/nbd/nbd.o 00:01:59.642 CC lib/nbd/nbd_rpc.o 00:01:59.642 CC lib/nvmf/nvmf.o 00:01:59.642 CC lib/ftl/utils/ftl_bitmap.o 00:01:59.642 CC lib/nvmf/subsystem.o 00:01:59.642 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:59.642 CC lib/ublk/ublk.o 00:01:59.642 CC lib/nvmf/nvmf_rpc.o 00:01:59.642 CC lib/ftl/utils/ftl_property.o 00:01:59.642 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:59.642 CC lib/nvmf/transport.o 00:01:59.642 CC lib/ublk/ublk_rpc.o 00:01:59.642 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:59.642 CC lib/nvmf/rdma.o 00:01:59.642 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:59.642 CC lib/nvmf/tcp.o 00:01:59.642 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:59.642 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:59.642 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:59.642 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:59.642 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:59.642 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:59.642 CC lib/ftl/base/ftl_base_bdev.o 00:01:59.642 CC lib/ftl/base/ftl_base_dev.o 00:02:00.210 LIB libspdk_nbd.a 00:02:00.210 SO libspdk_nbd.so.7.0 00:02:00.210 LIB libspdk_scsi.a 00:02:00.210 SYMLINK libspdk_nbd.so 00:02:00.210 SO libspdk_scsi.so.9.0 00:02:00.210 SYMLINK libspdk_scsi.so 00:02:00.210 LIB libspdk_ublk.a 00:02:00.210 SO libspdk_ublk.so.3.0 00:02:00.468 LIB libspdk_ftl.a 00:02:00.468 SYMLINK libspdk_ublk.so 00:02:00.468 CC lib/vhost/vhost.o 00:02:00.468 CC lib/vhost/vhost_rpc.o 00:02:00.468 CC lib/vhost/vhost_scsi.o 00:02:00.468 CC lib/vhost/rte_vhost_user.o 00:02:00.468 CC lib/vhost/vhost_blk.o 00:02:00.468 CC 
lib/iscsi/conn.o 00:02:00.468 CC lib/iscsi/init_grp.o 00:02:00.468 CC lib/iscsi/iscsi.o 00:02:00.468 CC lib/iscsi/md5.o 00:02:00.468 CC lib/iscsi/param.o 00:02:00.468 CC lib/iscsi/tgt_node.o 00:02:00.468 CC lib/iscsi/portal_grp.o 00:02:00.468 CC lib/iscsi/iscsi_subsystem.o 00:02:00.468 CC lib/iscsi/iscsi_rpc.o 00:02:00.468 CC lib/iscsi/task.o 00:02:00.468 SO libspdk_ftl.so.9.0 00:02:00.727 SYMLINK libspdk_ftl.so 00:02:01.296 LIB libspdk_vhost.a 00:02:01.296 SO libspdk_vhost.so.8.0 00:02:01.296 LIB libspdk_nvmf.a 00:02:01.296 SYMLINK libspdk_vhost.so 00:02:01.296 SO libspdk_nvmf.so.18.0 00:02:01.296 LIB libspdk_iscsi.a 00:02:01.556 SO libspdk_iscsi.so.8.0 00:02:01.556 SYMLINK libspdk_nvmf.so 00:02:01.556 SYMLINK libspdk_iscsi.so 00:02:01.817 CC module/env_dpdk/env_dpdk_rpc.o 00:02:01.817 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:01.817 CC module/accel/iaa/accel_iaa.o 00:02:01.817 CC module/accel/iaa/accel_iaa_rpc.o 00:02:01.817 CC module/scheduler/gscheduler/gscheduler.o 00:02:01.817 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:01.817 CC module/blob/bdev/blob_bdev.o 00:02:01.817 CC module/sock/posix/posix.o 00:02:01.817 CC module/accel/error/accel_error.o 00:02:01.817 CC module/accel/error/accel_error_rpc.o 00:02:01.817 CC module/accel/dsa/accel_dsa.o 00:02:01.817 CC module/accel/dsa/accel_dsa_rpc.o 00:02:02.076 CC module/accel/ioat/accel_ioat_rpc.o 00:02:02.076 CC module/accel/ioat/accel_ioat.o 00:02:02.076 LIB libspdk_env_dpdk_rpc.a 00:02:02.076 SO libspdk_env_dpdk_rpc.so.6.0 00:02:02.076 LIB libspdk_scheduler_gscheduler.a 00:02:02.076 LIB libspdk_scheduler_dpdk_governor.a 00:02:02.076 SO libspdk_scheduler_gscheduler.so.4.0 00:02:02.076 SYMLINK libspdk_env_dpdk_rpc.so 00:02:02.076 LIB libspdk_scheduler_dynamic.a 00:02:02.076 LIB libspdk_accel_error.a 00:02:02.076 LIB libspdk_accel_iaa.a 00:02:02.076 LIB libspdk_accel_ioat.a 00:02:02.076 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:02.076 SO libspdk_scheduler_dynamic.so.4.0 00:02:02.076 SO libspdk_accel_error.so.2.0 00:02:02.076 SO libspdk_accel_iaa.so.3.0 00:02:02.076 SYMLINK libspdk_scheduler_gscheduler.so 00:02:02.076 LIB libspdk_accel_dsa.a 00:02:02.076 LIB libspdk_blob_bdev.a 00:02:02.076 SO libspdk_accel_ioat.so.6.0 00:02:02.076 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:02.076 SYMLINK libspdk_scheduler_dynamic.so 00:02:02.076 SO libspdk_blob_bdev.so.11.0 00:02:02.076 SO libspdk_accel_dsa.so.5.0 00:02:02.076 SYMLINK libspdk_accel_iaa.so 00:02:02.076 SYMLINK libspdk_accel_error.so 00:02:02.076 SYMLINK libspdk_accel_ioat.so 00:02:02.336 SYMLINK libspdk_accel_dsa.so 00:02:02.336 SYMLINK libspdk_blob_bdev.so 00:02:02.596 LIB libspdk_sock_posix.a 00:02:02.596 CC module/bdev/delay/vbdev_delay.o 00:02:02.596 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:02.596 CC module/bdev/error/vbdev_error_rpc.o 00:02:02.596 CC module/bdev/error/vbdev_error.o 00:02:02.596 CC module/bdev/iscsi/bdev_iscsi.o 00:02:02.596 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:02.596 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:02.596 CC module/blobfs/bdev/blobfs_bdev.o 00:02:02.596 SO libspdk_sock_posix.so.6.0 00:02:02.596 CC module/bdev/nvme/bdev_nvme.o 00:02:02.596 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:02.596 CC module/bdev/gpt/gpt.o 00:02:02.596 CC module/bdev/nvme/bdev_mdns_client.o 00:02:02.596 CC module/bdev/nvme/nvme_rpc.o 00:02:02.596 CC module/bdev/nvme/vbdev_opal.o 00:02:02.596 CC module/bdev/lvol/vbdev_lvol.o 00:02:02.596 CC module/bdev/gpt/vbdev_gpt.o 00:02:02.596 CC module/bdev/malloc/bdev_malloc.o 00:02:02.596 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:02:02.596 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:02.596 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:02.596 CC module/bdev/ftl/bdev_ftl.o 00:02:02.596 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:02.596 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:02.596 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:02.596 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:02.596 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:02.596 CC module/bdev/raid/bdev_raid.o 00:02:02.596 CC module/bdev/raid/bdev_raid_sb.o 00:02:02.596 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:02.596 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:02.596 CC module/bdev/raid/bdev_raid_rpc.o 00:02:02.596 CC module/bdev/raid/raid0.o 00:02:02.596 CC module/bdev/null/bdev_null.o 00:02:02.596 CC module/bdev/null/bdev_null_rpc.o 00:02:02.596 CC module/bdev/raid/concat.o 00:02:02.596 CC module/bdev/raid/raid1.o 00:02:02.596 CC module/bdev/split/vbdev_split_rpc.o 00:02:02.596 CC module/bdev/split/vbdev_split.o 00:02:02.596 CC module/bdev/aio/bdev_aio_rpc.o 00:02:02.596 CC module/bdev/aio/bdev_aio.o 00:02:02.596 CC module/bdev/passthru/vbdev_passthru.o 00:02:02.596 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:02.596 SYMLINK libspdk_sock_posix.so 00:02:02.596 LIB libspdk_blobfs_bdev.a 00:02:02.858 SO libspdk_blobfs_bdev.so.6.0 00:02:02.858 LIB libspdk_bdev_error.a 00:02:02.858 LIB libspdk_bdev_split.a 00:02:02.858 LIB libspdk_bdev_gpt.a 00:02:02.858 SO libspdk_bdev_error.so.6.0 00:02:02.858 LIB libspdk_bdev_null.a 00:02:02.858 SYMLINK libspdk_blobfs_bdev.so 00:02:02.858 SO libspdk_bdev_gpt.so.6.0 00:02:02.858 LIB libspdk_bdev_ftl.a 00:02:02.858 SO libspdk_bdev_split.so.6.0 00:02:02.858 SO libspdk_bdev_null.so.6.0 00:02:02.858 LIB libspdk_bdev_zone_block.a 00:02:02.858 LIB libspdk_bdev_iscsi.a 00:02:02.858 LIB libspdk_bdev_passthru.a 00:02:02.858 LIB libspdk_bdev_aio.a 00:02:02.858 SO libspdk_bdev_ftl.so.6.0 00:02:02.858 SYMLINK libspdk_bdev_error.so 00:02:02.858 LIB libspdk_bdev_delay.a 00:02:02.858 SO libspdk_bdev_zone_block.so.6.0 00:02:02.858 SO libspdk_bdev_iscsi.so.6.0 00:02:02.858 SYMLINK libspdk_bdev_gpt.so 00:02:02.858 SO libspdk_bdev_passthru.so.6.0 00:02:02.858 SYMLINK libspdk_bdev_split.so 00:02:02.858 SO libspdk_bdev_aio.so.6.0 00:02:02.858 LIB libspdk_bdev_malloc.a 00:02:02.858 SYMLINK libspdk_bdev_null.so 00:02:02.858 SO libspdk_bdev_delay.so.6.0 00:02:02.858 SYMLINK libspdk_bdev_ftl.so 00:02:02.858 SO libspdk_bdev_malloc.so.6.0 00:02:02.858 SYMLINK libspdk_bdev_zone_block.so 00:02:02.858 SYMLINK libspdk_bdev_passthru.so 00:02:02.858 SYMLINK libspdk_bdev_iscsi.so 00:02:02.858 SYMLINK libspdk_bdev_aio.so 00:02:03.120 SYMLINK libspdk_bdev_delay.so 00:02:03.120 LIB libspdk_bdev_lvol.a 00:02:03.120 LIB libspdk_bdev_virtio.a 00:02:03.120 SYMLINK libspdk_bdev_malloc.so 00:02:03.120 SO libspdk_bdev_lvol.so.6.0 00:02:03.120 SO libspdk_bdev_virtio.so.6.0 00:02:03.120 SYMLINK libspdk_bdev_lvol.so 00:02:03.120 SYMLINK libspdk_bdev_virtio.so 00:02:03.380 LIB libspdk_bdev_raid.a 00:02:03.380 SO libspdk_bdev_raid.so.6.0 00:02:03.380 SYMLINK libspdk_bdev_raid.so 00:02:03.949 LIB libspdk_bdev_nvme.a 00:02:04.209 SO libspdk_bdev_nvme.so.7.0 00:02:04.209 SYMLINK libspdk_bdev_nvme.so 00:02:04.470 CC module/event/subsystems/vmd/vmd.o 00:02:04.470 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:04.470 CC module/event/subsystems/scheduler/scheduler.o 00:02:04.470 CC module/event/subsystems/iobuf/iobuf.o 00:02:04.470 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:04.731 CC 
module/event/subsystems/sock/sock.o 00:02:04.731 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:04.731 LIB libspdk_event_vmd.a 00:02:04.731 LIB libspdk_event_scheduler.a 00:02:04.731 LIB libspdk_event_iobuf.a 00:02:04.731 SO libspdk_event_scheduler.so.4.0 00:02:04.731 SO libspdk_event_vmd.so.6.0 00:02:04.731 LIB libspdk_event_sock.a 00:02:04.731 LIB libspdk_event_vhost_blk.a 00:02:04.731 SO libspdk_event_iobuf.so.3.0 00:02:04.731 SO libspdk_event_sock.so.5.0 00:02:04.731 SO libspdk_event_vhost_blk.so.3.0 00:02:04.731 SYMLINK libspdk_event_scheduler.so 00:02:04.731 SYMLINK libspdk_event_vmd.so 00:02:04.731 SYMLINK libspdk_event_iobuf.so 00:02:04.731 SYMLINK libspdk_event_sock.so 00:02:04.731 SYMLINK libspdk_event_vhost_blk.so 00:02:04.991 CC module/event/subsystems/accel/accel.o 00:02:05.252 LIB libspdk_event_accel.a 00:02:05.252 SO libspdk_event_accel.so.6.0 00:02:05.252 SYMLINK libspdk_event_accel.so 00:02:05.512 CC module/event/subsystems/bdev/bdev.o 00:02:05.512 LIB libspdk_event_bdev.a 00:02:05.512 SO libspdk_event_bdev.so.6.0 00:02:05.772 SYMLINK libspdk_event_bdev.so 00:02:05.772 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:05.772 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:05.772 CC module/event/subsystems/scsi/scsi.o 00:02:05.772 CC module/event/subsystems/ublk/ublk.o 00:02:05.772 CC module/event/subsystems/nbd/nbd.o 00:02:06.031 LIB libspdk_event_scsi.a 00:02:06.031 LIB libspdk_event_ublk.a 00:02:06.031 LIB libspdk_event_nbd.a 00:02:06.031 SO libspdk_event_scsi.so.6.0 00:02:06.031 LIB libspdk_event_nvmf.a 00:02:06.031 SO libspdk_event_ublk.so.3.0 00:02:06.031 SO libspdk_event_nbd.so.6.0 00:02:06.031 SO libspdk_event_nvmf.so.6.0 00:02:06.031 SYMLINK libspdk_event_scsi.so 00:02:06.031 SYMLINK libspdk_event_ublk.so 00:02:06.031 SYMLINK libspdk_event_nbd.so 00:02:06.031 SYMLINK libspdk_event_nvmf.so 00:02:06.291 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:06.291 CC module/event/subsystems/iscsi/iscsi.o 00:02:06.291 LIB libspdk_event_vhost_scsi.a 00:02:06.551 LIB libspdk_event_iscsi.a 00:02:06.551 SO libspdk_event_vhost_scsi.so.3.0 00:02:06.551 SO libspdk_event_iscsi.so.6.0 00:02:06.551 SYMLINK libspdk_event_vhost_scsi.so 00:02:06.551 SYMLINK libspdk_event_iscsi.so 00:02:06.551 SO libspdk.so.6.0 00:02:06.551 SYMLINK libspdk.so 00:02:06.811 CC app/spdk_nvme_identify/identify.o 00:02:06.811 CXX app/trace/trace.o 00:02:06.811 CC app/spdk_nvme_perf/perf.o 00:02:06.811 CC app/trace_record/trace_record.o 00:02:06.811 CC app/spdk_nvme_discover/discovery_aer.o 00:02:06.811 CC app/spdk_top/spdk_top.o 00:02:06.811 CC test/rpc_client/rpc_client_test.o 00:02:06.811 TEST_HEADER include/spdk/accel.h 00:02:06.811 TEST_HEADER include/spdk/accel_module.h 00:02:06.811 TEST_HEADER include/spdk/assert.h 00:02:06.811 TEST_HEADER include/spdk/barrier.h 00:02:06.811 TEST_HEADER include/spdk/bdev.h 00:02:06.811 TEST_HEADER include/spdk/bdev_module.h 00:02:06.811 TEST_HEADER include/spdk/base64.h 00:02:06.811 TEST_HEADER include/spdk/bdev_zone.h 00:02:06.811 CC app/spdk_lspci/spdk_lspci.o 00:02:06.811 TEST_HEADER include/spdk/bit_pool.h 00:02:06.811 TEST_HEADER include/spdk/bit_array.h 00:02:06.811 TEST_HEADER include/spdk/blob_bdev.h 00:02:06.811 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:06.811 TEST_HEADER include/spdk/blobfs.h 00:02:06.811 TEST_HEADER include/spdk/conf.h 00:02:06.811 TEST_HEADER include/spdk/blob.h 00:02:06.811 TEST_HEADER include/spdk/crc16.h 00:02:06.811 TEST_HEADER include/spdk/config.h 00:02:06.811 TEST_HEADER include/spdk/crc32.h 00:02:06.811 TEST_HEADER 
include/spdk/cpuset.h 00:02:06.811 TEST_HEADER include/spdk/crc64.h 00:02:06.811 TEST_HEADER include/spdk/dif.h 00:02:06.811 TEST_HEADER include/spdk/dma.h 00:02:06.811 CC app/spdk_dd/spdk_dd.o 00:02:06.811 TEST_HEADER include/spdk/endian.h 00:02:06.811 TEST_HEADER include/spdk/env_dpdk.h 00:02:06.811 TEST_HEADER include/spdk/env.h 00:02:06.811 TEST_HEADER include/spdk/fd_group.h 00:02:06.811 TEST_HEADER include/spdk/event.h 00:02:06.811 TEST_HEADER include/spdk/file.h 00:02:06.811 TEST_HEADER include/spdk/ftl.h 00:02:06.811 TEST_HEADER include/spdk/fd.h 00:02:06.811 TEST_HEADER include/spdk/gpt_spec.h 00:02:06.811 TEST_HEADER include/spdk/hexlify.h 00:02:06.811 CC app/nvmf_tgt/nvmf_main.o 00:02:06.811 TEST_HEADER include/spdk/histogram_data.h 00:02:06.811 TEST_HEADER include/spdk/idxd.h 00:02:06.811 TEST_HEADER include/spdk/init.h 00:02:06.811 TEST_HEADER include/spdk/idxd_spec.h 00:02:06.811 TEST_HEADER include/spdk/ioat_spec.h 00:02:06.811 TEST_HEADER include/spdk/ioat.h 00:02:06.811 TEST_HEADER include/spdk/iscsi_spec.h 00:02:06.811 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:06.811 TEST_HEADER include/spdk/jsonrpc.h 00:02:06.811 TEST_HEADER include/spdk/json.h 00:02:06.811 TEST_HEADER include/spdk/likely.h 00:02:06.811 TEST_HEADER include/spdk/log.h 00:02:06.811 TEST_HEADER include/spdk/mmio.h 00:02:06.811 TEST_HEADER include/spdk/nbd.h 00:02:06.811 TEST_HEADER include/spdk/lvol.h 00:02:06.811 TEST_HEADER include/spdk/memory.h 00:02:06.811 TEST_HEADER include/spdk/notify.h 00:02:06.811 TEST_HEADER include/spdk/nvme_intel.h 00:02:06.811 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:06.811 TEST_HEADER include/spdk/nvme.h 00:02:06.811 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:06.811 CC app/iscsi_tgt/iscsi_tgt.o 00:02:06.811 TEST_HEADER include/spdk/nvme_zns.h 00:02:06.811 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:06.811 TEST_HEADER include/spdk/nvme_spec.h 00:02:06.811 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:06.811 TEST_HEADER include/spdk/nvmf_spec.h 00:02:06.811 TEST_HEADER include/spdk/nvmf.h 00:02:06.811 CC app/vhost/vhost.o 00:02:06.811 TEST_HEADER include/spdk/nvmf_transport.h 00:02:06.811 TEST_HEADER include/spdk/opal.h 00:02:06.811 TEST_HEADER include/spdk/opal_spec.h 00:02:06.811 TEST_HEADER include/spdk/pipe.h 00:02:06.811 TEST_HEADER include/spdk/pci_ids.h 00:02:06.811 TEST_HEADER include/spdk/queue.h 00:02:06.811 TEST_HEADER include/spdk/rpc.h 00:02:06.811 TEST_HEADER include/spdk/reduce.h 00:02:06.811 CC app/spdk_tgt/spdk_tgt.o 00:02:06.811 TEST_HEADER include/spdk/scheduler.h 00:02:06.811 TEST_HEADER include/spdk/scsi.h 00:02:07.076 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.076 TEST_HEADER include/spdk/sock.h 00:02:07.077 TEST_HEADER include/spdk/string.h 00:02:07.077 TEST_HEADER include/spdk/stdinc.h 00:02:07.077 TEST_HEADER include/spdk/thread.h 00:02:07.077 TEST_HEADER include/spdk/trace_parser.h 00:02:07.077 TEST_HEADER include/spdk/trace.h 00:02:07.077 TEST_HEADER include/spdk/tree.h 00:02:07.077 TEST_HEADER include/spdk/ublk.h 00:02:07.077 TEST_HEADER include/spdk/version.h 00:02:07.077 TEST_HEADER include/spdk/util.h 00:02:07.077 TEST_HEADER include/spdk/uuid.h 00:02:07.077 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.077 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.077 TEST_HEADER include/spdk/vhost.h 00:02:07.077 TEST_HEADER include/spdk/vmd.h 00:02:07.077 TEST_HEADER include/spdk/xor.h 00:02:07.077 TEST_HEADER include/spdk/zipf.h 00:02:07.077 CXX test/cpp_headers/accel.o 00:02:07.077 CXX test/cpp_headers/accel_module.o 00:02:07.077 CC 
examples/ioat/perf/perf.o 00:02:07.077 CXX test/cpp_headers/assert.o 00:02:07.077 CXX test/cpp_headers/barrier.o 00:02:07.077 CC examples/nvme/abort/abort.o 00:02:07.077 CXX test/cpp_headers/base64.o 00:02:07.077 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:07.077 CC examples/nvme/hotplug/hotplug.o 00:02:07.077 CXX test/cpp_headers/bdev_module.o 00:02:07.077 CC examples/vmd/lsvmd/lsvmd.o 00:02:07.077 CXX test/cpp_headers/bdev.o 00:02:07.077 CXX test/cpp_headers/bit_array.o 00:02:07.077 CC examples/vmd/led/led.o 00:02:07.077 CC examples/idxd/perf/perf.o 00:02:07.077 CXX test/cpp_headers/bdev_zone.o 00:02:07.077 CC examples/util/zipf/zipf.o 00:02:07.077 CXX test/cpp_headers/bit_pool.o 00:02:07.077 CXX test/cpp_headers/blob_bdev.o 00:02:07.077 CC examples/nvme/reconnect/reconnect.o 00:02:07.077 CC examples/nvme/arbitration/arbitration.o 00:02:07.077 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.077 CC examples/accel/perf/accel_perf.o 00:02:07.077 CXX test/cpp_headers/blobfs.o 00:02:07.077 CXX test/cpp_headers/blob.o 00:02:07.077 CC examples/nvme/hello_world/hello_world.o 00:02:07.077 CXX test/cpp_headers/conf.o 00:02:07.077 CXX test/cpp_headers/config.o 00:02:07.077 CC examples/ioat/verify/verify.o 00:02:07.077 CXX test/cpp_headers/crc16.o 00:02:07.077 CXX test/cpp_headers/cpuset.o 00:02:07.077 CC test/app/jsoncat/jsoncat.o 00:02:07.077 CXX test/cpp_headers/crc32.o 00:02:07.077 CC test/event/reactor_perf/reactor_perf.o 00:02:07.077 CXX test/cpp_headers/crc64.o 00:02:07.077 CXX test/cpp_headers/dif.o 00:02:07.077 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:07.077 CC examples/sock/hello_world/hello_sock.o 00:02:07.077 CC test/env/vtophys/vtophys.o 00:02:07.077 CC test/nvme/aer/aer.o 00:02:07.077 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:07.077 CC test/event/app_repeat/app_repeat.o 00:02:07.077 CC test/env/pci/pci_ut.o 00:02:07.077 CC examples/blob/cli/blobcli.o 00:02:07.077 CC examples/blob/hello_world/hello_blob.o 00:02:07.077 CC test/app/stub/stub.o 00:02:07.077 CC app/fio/nvme/fio_plugin.o 00:02:07.077 CC test/app/histogram_perf/histogram_perf.o 00:02:07.077 CC test/env/memory/memory_ut.o 00:02:07.077 CC test/event/event_perf/event_perf.o 00:02:07.077 CC test/event/reactor/reactor.o 00:02:07.077 CC test/thread/poller_perf/poller_perf.o 00:02:07.077 CC test/nvme/startup/startup.o 00:02:07.077 CC test/nvme/simple_copy/simple_copy.o 00:02:07.077 CC test/nvme/sgl/sgl.o 00:02:07.077 CC test/nvme/e2edp/nvme_dp.o 00:02:07.077 CC examples/bdev/hello_world/hello_bdev.o 00:02:07.077 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.077 CC test/nvme/reserve/reserve.o 00:02:07.077 CC test/nvme/boot_partition/boot_partition.o 00:02:07.077 CC examples/thread/thread/thread_ex.o 00:02:07.077 CC test/nvme/fused_ordering/fused_ordering.o 00:02:07.077 CC test/nvme/reset/reset.o 00:02:07.077 CC test/nvme/compliance/nvme_compliance.o 00:02:07.077 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:07.077 CC test/nvme/overhead/overhead.o 00:02:07.077 CC test/nvme/connect_stress/connect_stress.o 00:02:07.077 CC examples/bdev/bdevperf/bdevperf.o 00:02:07.077 CC test/nvme/cuse/cuse.o 00:02:07.077 CC test/nvme/fdp/fdp.o 00:02:07.077 CC test/nvme/err_injection/err_injection.o 00:02:07.077 CC test/app/bdev_svc/bdev_svc.o 00:02:07.077 CC test/dma/test_dma/test_dma.o 00:02:07.077 CC test/bdev/bdevio/bdevio.o 00:02:07.077 CC examples/nvmf/nvmf/nvmf.o 00:02:07.077 CC test/accel/dif/dif.o 00:02:07.077 CC test/blobfs/mkfs/mkfs.o 00:02:07.077 CC test/event/scheduler/scheduler.o 00:02:07.077 CC 
app/fio/bdev/fio_plugin.o 00:02:07.077 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.077 LINK spdk_lspci 00:02:07.347 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:07.347 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.347 LINK nvmf_tgt 00:02:07.347 LINK spdk_nvme_discover 00:02:07.347 CC test/lvol/esnap/esnap.o 00:02:07.347 LINK rpc_client_test 00:02:07.347 LINK zipf 00:02:07.347 LINK led 00:02:07.347 LINK iscsi_tgt 00:02:07.347 LINK reactor_perf 00:02:07.347 LINK reactor 00:02:07.347 LINK interrupt_tgt 00:02:07.347 LINK app_repeat 00:02:07.347 LINK histogram_perf 00:02:07.347 LINK lsvmd 00:02:07.347 LINK jsoncat 00:02:07.347 LINK stub 00:02:07.347 LINK spdk_trace_record 00:02:07.347 LINK hello_world 00:02:07.347 LINK poller_perf 00:02:07.616 LINK vhost 00:02:07.616 LINK vtophys 00:02:07.616 LINK event_perf 00:02:07.616 LINK cmb_copy 00:02:07.616 LINK hotplug 00:02:07.616 LINK pmr_persistence 00:02:07.616 LINK spdk_tgt 00:02:07.616 LINK env_dpdk_post_init 00:02:07.616 LINK startup 00:02:07.616 LINK boot_partition 00:02:07.616 LINK ioat_perf 00:02:07.616 CXX test/cpp_headers/dma.o 00:02:07.616 LINK bdev_svc 00:02:07.616 LINK err_injection 00:02:07.616 LINK doorbell_aers 00:02:07.616 CXX test/cpp_headers/endian.o 00:02:07.616 CXX test/cpp_headers/env_dpdk.o 00:02:07.616 CXX test/cpp_headers/env.o 00:02:07.616 CXX test/cpp_headers/event.o 00:02:07.616 LINK spdk_dd 00:02:07.616 LINK verify 00:02:07.616 LINK hello_sock 00:02:07.616 CXX test/cpp_headers/fd_group.o 00:02:07.616 LINK connect_stress 00:02:07.616 CXX test/cpp_headers/fd.o 00:02:07.616 CXX test/cpp_headers/file.o 00:02:07.616 CXX test/cpp_headers/ftl.o 00:02:07.616 LINK fused_ordering 00:02:07.616 CXX test/cpp_headers/gpt_spec.o 00:02:07.616 LINK reserve 00:02:07.616 LINK thread 00:02:07.616 LINK hello_blob 00:02:07.616 CXX test/cpp_headers/hexlify.o 00:02:07.616 LINK simple_copy 00:02:07.616 CXX test/cpp_headers/histogram_data.o 00:02:07.616 LINK mkfs 00:02:07.616 CXX test/cpp_headers/idxd.o 00:02:07.616 LINK sgl 00:02:07.616 LINK hello_bdev 00:02:07.616 CXX test/cpp_headers/idxd_spec.o 00:02:07.616 CXX test/cpp_headers/init.o 00:02:07.616 LINK scheduler 00:02:07.616 CXX test/cpp_headers/ioat.o 00:02:07.616 LINK reset 00:02:07.616 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:07.616 CXX test/cpp_headers/ioat_spec.o 00:02:07.616 CXX test/cpp_headers/iscsi_spec.o 00:02:07.616 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:07.616 CXX test/cpp_headers/json.o 00:02:07.616 LINK aer 00:02:07.616 CXX test/cpp_headers/jsonrpc.o 00:02:07.616 LINK nvme_dp 00:02:07.616 CXX test/cpp_headers/likely.o 00:02:07.616 LINK idxd_perf 00:02:07.616 LINK fdp 00:02:07.616 LINK nvme_compliance 00:02:07.889 CXX test/cpp_headers/log.o 00:02:07.889 LINK overhead 00:02:07.889 CXX test/cpp_headers/lvol.o 00:02:07.889 LINK reconnect 00:02:07.889 LINK nvmf 00:02:07.889 CXX test/cpp_headers/memory.o 00:02:07.889 LINK arbitration 00:02:07.889 CXX test/cpp_headers/mmio.o 00:02:07.889 LINK abort 00:02:07.889 CXX test/cpp_headers/nbd.o 00:02:07.889 CXX test/cpp_headers/notify.o 00:02:07.889 CXX test/cpp_headers/nvme.o 00:02:07.889 CXX test/cpp_headers/nvme_intel.o 00:02:07.889 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.889 LINK test_dma 00:02:07.889 LINK spdk_trace 00:02:07.889 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.889 CXX test/cpp_headers/nvme_spec.o 00:02:07.889 LINK pci_ut 00:02:07.889 CXX test/cpp_headers/nvme_zns.o 00:02:07.889 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.889 LINK bdevio 00:02:07.889 LINK dif 00:02:07.889 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:02:07.889 CXX test/cpp_headers/nvmf.o 00:02:07.889 CXX test/cpp_headers/nvmf_spec.o 00:02:07.889 CXX test/cpp_headers/nvmf_transport.o 00:02:07.889 CXX test/cpp_headers/opal.o 00:02:07.889 CXX test/cpp_headers/opal_spec.o 00:02:07.889 LINK accel_perf 00:02:07.889 CXX test/cpp_headers/pci_ids.o 00:02:07.889 CXX test/cpp_headers/pipe.o 00:02:07.889 CXX test/cpp_headers/queue.o 00:02:07.889 CXX test/cpp_headers/reduce.o 00:02:07.889 LINK nvme_manage 00:02:07.889 CXX test/cpp_headers/rpc.o 00:02:07.889 CXX test/cpp_headers/scheduler.o 00:02:07.889 CXX test/cpp_headers/scsi.o 00:02:07.889 CXX test/cpp_headers/scsi_spec.o 00:02:07.889 CXX test/cpp_headers/sock.o 00:02:07.889 CXX test/cpp_headers/stdinc.o 00:02:07.889 CXX test/cpp_headers/string.o 00:02:07.889 CXX test/cpp_headers/thread.o 00:02:07.889 CXX test/cpp_headers/trace.o 00:02:08.149 CXX test/cpp_headers/trace_parser.o 00:02:08.149 CXX test/cpp_headers/tree.o 00:02:08.149 CXX test/cpp_headers/ublk.o 00:02:08.149 CXX test/cpp_headers/util.o 00:02:08.149 CXX test/cpp_headers/uuid.o 00:02:08.149 CXX test/cpp_headers/version.o 00:02:08.149 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.149 CXX test/cpp_headers/vhost.o 00:02:08.149 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.149 CXX test/cpp_headers/vmd.o 00:02:08.149 CXX test/cpp_headers/xor.o 00:02:08.149 LINK blobcli 00:02:08.149 CXX test/cpp_headers/zipf.o 00:02:08.149 LINK spdk_nvme 00:02:08.149 LINK spdk_bdev 00:02:08.149 LINK nvme_fuzz 00:02:08.149 LINK mem_callbacks 00:02:08.149 LINK spdk_nvme_perf 00:02:08.410 LINK vhost_fuzz 00:02:08.410 LINK bdevperf 00:02:08.410 LINK spdk_top 00:02:08.410 LINK spdk_nvme_identify 00:02:08.410 LINK memory_ut 00:02:08.670 LINK cuse 00:02:08.931 LINK iscsi_fuzz 00:02:10.843 LINK esnap 00:02:11.414 00:02:11.414 real 0m39.548s 00:02:11.414 user 6m43.047s 00:02:11.414 sys 2m59.836s 00:02:11.414 20:02:48 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:11.414 20:02:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.414 ************************************ 00:02:11.414 END TEST make 00:02:11.414 ************************************ 00:02:11.414 20:02:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:11.414 20:02:48 -- nvmf/common.sh@7 -- # uname -s 00:02:11.414 20:02:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:11.414 20:02:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:11.414 20:02:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:11.414 20:02:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:11.414 20:02:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:11.414 20:02:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:11.414 20:02:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:11.414 20:02:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:11.414 20:02:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:11.414 20:02:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:11.414 20:02:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:02:11.414 20:02:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:02:11.414 20:02:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:11.414 20:02:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:11.414 20:02:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:11.414 20:02:48 -- 
nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:11.414 20:02:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:11.414 20:02:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.414 20:02:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.415 20:02:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.415 20:02:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.415 20:02:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.415 20:02:48 -- paths/export.sh@5 -- # export PATH 00:02:11.415 20:02:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.415 20:02:48 -- nvmf/common.sh@46 -- # : 0 00:02:11.415 20:02:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:11.415 20:02:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:11.415 20:02:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:11.415 20:02:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:11.415 20:02:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:11.415 20:02:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:11.415 20:02:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:11.415 20:02:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:11.415 20:02:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:11.415 20:02:48 -- spdk/autotest.sh@32 -- # uname -s 00:02:11.415 20:02:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:11.415 20:02:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:11.415 20:02:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.415 20:02:48 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:11.415 20:02:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.415 20:02:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:11.415 20:02:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:11.415 20:02:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:11.415 20:02:48 -- spdk/autotest.sh@48 -- # udevadm_pid=1538394 00:02:11.415 20:02:48 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:11.415 20:02:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:11.415 20:02:48 -- 
spdk/autotest.sh@54 -- # echo 1538396 00:02:11.415 20:02:48 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:11.415 20:02:48 -- spdk/autotest.sh@56 -- # echo 1538397 00:02:11.415 20:02:48 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:11.415 20:02:48 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:11.415 20:02:48 -- spdk/autotest.sh@60 -- # echo 1538398 00:02:11.415 20:02:48 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:11.415 20:02:48 -- spdk/autotest.sh@62 -- # echo 1538399 00:02:11.415 20:02:48 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:11.415 20:02:48 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:11.415 20:02:48 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:11.415 20:02:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:11.415 20:02:48 -- common/autotest_common.sh@10 -- # set +x 00:02:11.415 20:02:48 -- spdk/autotest.sh@70 -- # create_test_list 00:02:11.415 20:02:48 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:11.415 20:02:48 -- common/autotest_common.sh@10 -- # set +x 00:02:11.415 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:11.415 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:11.415 20:02:48 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:11.415 20:02:48 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.415 20:02:48 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.415 20:02:48 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:11.415 20:02:48 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.415 20:02:48 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:11.415 20:02:48 -- common/autotest_common.sh@1438 -- # uname 00:02:11.415 20:02:48 -- common/autotest_common.sh@1438 -- # '[' Linux = FreeBSD ']' 00:02:11.415 20:02:48 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:11.415 20:02:48 -- common/autotest_common.sh@1458 -- # uname 00:02:11.415 20:02:48 -- common/autotest_common.sh@1458 -- # [[ Linux = FreeBSD ]] 00:02:11.415 20:02:48 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:11.415 20:02:48 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:11.415 20:02:48 -- spdk/autotest.sh@83 -- # hash lcov 00:02:11.415 20:02:48 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:11.415 20:02:48 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:11.415 --rc lcov_branch_coverage=1 00:02:11.415 --rc lcov_function_coverage=1 00:02:11.415 --rc genhtml_branch_coverage=1 00:02:11.415 --rc genhtml_function_coverage=1 00:02:11.415 --rc genhtml_legend=1 00:02:11.415 --rc geninfo_all_blocks=1 00:02:11.415 ' 00:02:11.415 20:02:48 -- 
spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:11.415 --rc lcov_branch_coverage=1 00:02:11.415 --rc lcov_function_coverage=1 00:02:11.415 --rc genhtml_branch_coverage=1 00:02:11.415 --rc genhtml_function_coverage=1 00:02:11.415 --rc genhtml_legend=1 00:02:11.415 --rc geninfo_all_blocks=1 00:02:11.415 ' 00:02:11.415 20:02:48 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:11.415 --rc lcov_branch_coverage=1 00:02:11.415 --rc lcov_function_coverage=1 00:02:11.415 --rc genhtml_branch_coverage=1 00:02:11.415 --rc genhtml_function_coverage=1 00:02:11.415 --rc genhtml_legend=1 00:02:11.415 --rc geninfo_all_blocks=1 00:02:11.415 --no-external' 00:02:11.415 20:02:48 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:11.415 --rc lcov_branch_coverage=1 00:02:11.415 --rc lcov_function_coverage=1 00:02:11.415 --rc genhtml_branch_coverage=1 00:02:11.415 --rc genhtml_function_coverage=1 00:02:11.415 --rc genhtml_legend=1 00:02:11.415 --rc geninfo_all_blocks=1 00:02:11.415 --no-external' 00:02:11.415 20:02:48 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:11.415 lcov: LCOV version 1.14 00:02:11.415 20:02:48 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:11.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:11.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:11.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:11.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:12.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:12.247 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:24.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:24.521 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:24.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:24.521 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:24.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:24.521 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:24.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:24.521 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:24.521 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:24.521 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:02:24.521-00:02:25.304 [geninfo repeated the same two-line warning pair — "<header>.gcno:no functions found" / "GCOV did not produce any data for <header>.gcno" — for every remaining header-compile object under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/, bdev.gcno through zipf.gcno]
00:02:33.431 20:03:10 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 20:03:10 -- common/autotest_common.sh@710 -- # xtrace_disable 20:03:10 -- common/autotest_common.sh@10 -- # set +x 00:02:33.431
20:03:10 -- spdk/autotest.sh@102 -- # rm -f 00:02:33.431 20:03:10 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:36.724 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:02:36.724 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:36.724 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:36.724 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:36.984 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:37.242 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:37.242 20:03:14 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:37.242 20:03:14 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:02:37.242 20:03:14 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:02:37.242 20:03:14 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:02:37.242 20:03:14 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:37.242 20:03:14 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:02:37.242 20:03:14 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:02:37.242 20:03:14 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:37.242 20:03:14 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:02:37.242 20:03:14 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:37.242 20:03:14 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:02:37.242 20:03:14 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:02:37.242 20:03:14 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:37.242 20:03:14 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:02:37.242 20:03:14 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:37.242 20:03:14 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:02:37.242 20:03:14 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:02:37.242 20:03:14 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:37.242 20:03:14 -- common/autotest_common.sh@1648 -- # [[ host-managed != none ]] 00:02:37.242 20:03:14 -- common/autotest_common.sh@1657 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:02:37.242 20:03:14 -- spdk/autotest.sh@109 -- # (( 1 > 0 )) 00:02:37.243 20:03:14 -- spdk/autotest.sh@114 -- # export PCI_BLOCKED=0000:5f:00.0 00:02:37.243 20:03:14 -- spdk/autotest.sh@114 -- # PCI_BLOCKED=0000:5f:00.0 00:02:37.243 20:03:14 -- spdk/autotest.sh@115 -- # export PCI_ZONED=0000:5f:00.0 00:02:37.243 20:03:14 -- spdk/autotest.sh@115 -- # PCI_ZONED=0000:5f:00.0 
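[For reference, the zoned-namespace scan traced above (autotest_common.sh@1652-1657) boils down to the loop below. This is a minimal sketch assuming the sysfs layout visible in the trace; the PCI-address resolution step is not shown in the log, so the readlink used here is an assumption.]
    #!/usr/bin/env bash
    shopt -s nullglob
    # Collect NVMe namespaces whose queue reports a zoned model other than "none".
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        device=${nvme##*/}
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            # Assumed resolution of the backing controller's PCI address (BDF);
            # the trace only shows the result, 0000:5f:00.0 for nvme1n2 on this box.
            bdf=$(basename "$(readlink -f "$nvme/device/device")")
            zoned_devs[$device]=$bdf
        fi
    done
    # The result is then exported as PCI_BLOCKED / PCI_ZONED so that
    # setup.sh reset leaves the host-managed 0000:5f:00.0 untouched.
    echo "zoned devices: ${!zoned_devs[*]} -> ${zoned_devs[*]}"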
00:02:37.243 20:03:14 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 00:02:37.243 20:03:14 -- spdk/autotest.sh@121 -- # grep -v p 00:02:37.243 20:03:14 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:37.243 20:03:14 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:37.243 20:03:14 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:37.243 20:03:14 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:37.243 20:03:14 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:37.243 No valid GPT data, bailing 00:02:37.243 20:03:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:37.243 20:03:14 -- scripts/common.sh@393 -- # pt= 00:02:37.243 20:03:14 -- scripts/common.sh@394 -- # return 1 00:02:37.243 20:03:14 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:37.243 1+0 records in 00:02:37.243 1+0 records out 00:02:37.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542477 s, 193 MB/s 00:02:37.243 20:03:14 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:37.243 20:03:14 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:37.243 20:03:14 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:02:37.243 20:03:14 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:02:37.243 20:03:14 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:37.243 No valid GPT data, bailing 00:02:37.243 20:03:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:37.243 20:03:14 -- scripts/common.sh@393 -- # pt= 00:02:37.243 20:03:14 -- scripts/common.sh@394 -- # return 1 00:02:37.243 20:03:14 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:37.243 1+0 records in 00:02:37.243 1+0 records out 00:02:37.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536058 s, 196 MB/s 00:02:37.243 20:03:14 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:37.243 20:03:14 -- spdk/autotest.sh@123 -- # [[ -z 0000:5f:00.0 ]] 00:02:37.243 20:03:14 -- spdk/autotest.sh@123 -- # continue 00:02:37.243 20:03:14 -- spdk/autotest.sh@129 -- # sync 00:02:37.243 20:03:14 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:37.243 20:03:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:37.243 20:03:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:42.519 20:03:19 -- spdk/autotest.sh@135 -- # uname -s 00:02:42.519 20:03:19 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:42.519 20:03:19 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:42.519 20:03:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:02:42.519 20:03:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:02:42.519 20:03:19 -- common/autotest_common.sh@10 -- # set +x 00:02:42.519 ************************************ 00:02:42.519 START TEST setup.sh 00:02:42.519 ************************************ 00:02:42.519 20:03:19 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:42.519 * Looking for test storage... 
00:02:42.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:42.519 20:03:19 -- setup/test-setup.sh@10 -- # uname -s 00:02:42.519 20:03:19 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:42.519 20:03:19 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:42.519 20:03:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:02:42.519 20:03:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:02:42.519 20:03:19 -- common/autotest_common.sh@10 -- # set +x 00:02:42.519 ************************************ 00:02:42.519 START TEST acl 00:02:42.519 ************************************ 00:02:42.519 20:03:19 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:42.519 * Looking for test storage... 00:02:42.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:42.519 20:03:19 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:42.519 20:03:19 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:02:42.519 20:03:19 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:02:42.519 20:03:19 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:02:42.519 20:03:19 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:42.519 20:03:19 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:02:42.519 20:03:19 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:02:42.519 20:03:19 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:42.519 20:03:19 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:02:42.519 20:03:19 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:42.519 20:03:19 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:02:42.519 20:03:19 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:02:42.519 20:03:19 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:42.519 20:03:19 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:02:42.519 20:03:19 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:42.519 20:03:19 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:02:42.519 20:03:19 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:02:42.519 20:03:19 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:42.519 20:03:19 -- common/autotest_common.sh@1648 -- # [[ host-managed != none ]] 00:02:42.519 20:03:19 -- common/autotest_common.sh@1657 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:02:42.519 20:03:19 -- setup/acl.sh@12 -- # devs=() 00:02:42.519 20:03:19 -- setup/acl.sh@12 -- # declare -a devs 00:02:42.519 20:03:19 -- setup/acl.sh@13 -- # drivers=() 00:02:42.519 20:03:19 -- setup/acl.sh@13 -- # declare -A drivers 00:02:42.519 20:03:19 -- setup/acl.sh@51 -- # setup reset 00:02:42.519 20:03:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:42.519 20:03:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.809 20:03:23 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:45.809 20:03:23 -- setup/acl.sh@16 -- # local dev driver 00:02:45.809 20:03:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.809 20:03:23 -- setup/acl.sh@15 -- # setup output status 00:02:45.809 20:03:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.809 20:03:23 -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:49.134 Hugepages 00:02:49.134 node hugesize free / total 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # continue 00:02:49.134 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # continue 00:02:49.134 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # continue 00:02:49.134 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.134 00:02:49.134 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # continue 00:02:49.134 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:49.134 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.134 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.134 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:49.134 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.134 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.134 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.134 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:49.134 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:49.135 
20:03:26 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@21 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.135 20:03:26 -- setup/acl.sh@20 -- # continue 00:02:49.135 20:03:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.135 20:03:26 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:49.135 20:03:26 -- setup/acl.sh@54 -- # run_test denied denied 00:02:49.135 20:03:26 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:02:49.135 20:03:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:02:49.135 20:03:26 -- common/autotest_common.sh@10 -- # set +x 00:02:49.135 ************************************ 00:02:49.135 START TEST denied 00:02:49.135 ************************************ 00:02:49.135 20:03:26 -- common/autotest_common.sh@1102 -- # denied 00:02:49.135 20:03:26 -- setup/acl.sh@38 -- # PCI_BLOCKED='0000:5f:00.0 0000:5e:00.0' 00:02:49.135 20:03:26 -- setup/acl.sh@38 -- # setup output config 00:02:49.135 20:03:26 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:49.135 20:03:26 -- setup/common.sh@9 -- # [[ output == output ]] 
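[The long read/continue cadence just above is the acl helper tokenizing setup.sh status output. A minimal sketch of that classification pass, assuming the column order printed in the status table (Type BDF Vendor Device NUMA Driver Device Block devices) and a `setup` wrapper like the one in the trace:]
    # Sketch of the device walk traced above (setup/acl.sh@18-24); `setup` is
    # assumed to wrap "scripts/setup.sh status" as it does in the trace.
    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue         # keep only BDF rows; skip hugepage/header rows
        [[ $driver == nvme ]] || continue         # ioatdma channels are ignored
        [[ $PCI_ZONED == *"$dev"* ]] && continue  # zoned controller 0000:5f:00.0 is skipped
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(setup output status)
    (( ${#devs[@]} > 0 ))  # the run above ends with one usable controller, 0000:5e:00.0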
00:02:49.135 20:03:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:53.362 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:53.362 20:03:29 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:53.362 20:03:29 -- setup/acl.sh@28 -- # local dev driver 00:02:53.362 20:03:29 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:53.362 20:03:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:53.362 20:03:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:53.362 20:03:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:53.362 20:03:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:53.362 20:03:29 -- setup/acl.sh@41 -- # setup reset 00:02:53.362 20:03:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:53.362 20:03:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.561 00:02:57.561 real 0m7.933s 00:02:57.561 user 0m2.673s 00:02:57.561 sys 0m4.570s 00:02:57.561 20:03:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:57.561 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:02:57.561 ************************************ 00:02:57.561 END TEST denied 00:02:57.561 ************************************ 00:02:57.561 20:03:34 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:57.561 20:03:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:02:57.561 20:03:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:02:57.561 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:02:57.561 ************************************ 00:02:57.561 START TEST allowed 00:02:57.561 ************************************ 00:02:57.561 20:03:34 -- common/autotest_common.sh@1102 -- # allowed 00:02:57.561 20:03:34 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:57.561 20:03:34 -- setup/acl.sh@45 -- # setup output config 00:02:57.561 20:03:34 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:57.561 20:03:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.561 20:03:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:00.855 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:00.855 20:03:37 -- setup/acl.sh@47 -- # verify 00:03:00.855 20:03:37 -- setup/acl.sh@28 -- # local dev driver 00:03:00.855 20:03:38 -- setup/acl.sh@48 -- # setup reset 00:03:00.855 20:03:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.855 20:03:38 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.055 00:03:05.055 real 0m7.272s 00:03:05.055 user 0m2.175s 00:03:05.055 sys 0m3.919s 00:03:05.055 20:03:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:05.055 20:03:41 -- common/autotest_common.sh@10 -- # set +x 00:03:05.055 ************************************ 00:03:05.055 END TEST allowed 00:03:05.055 ************************************ 00:03:05.055 00:03:05.055 real 0m22.224s 00:03:05.055 user 0m7.610s 00:03:05.055 sys 0m12.979s 00:03:05.055 20:03:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:05.055 20:03:41 -- common/autotest_common.sh@10 -- # set +x 00:03:05.055 ************************************ 00:03:05.055 END TEST acl 00:03:05.055 ************************************ 00:03:05.055 20:03:41 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 
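[The denied/allowed pair just traced drives scripts/setup.sh through PCI_BLOCKED and PCI_ALLOWED. The gate presumably looks something like the sketch below; this is illustrative only — the real implementation lives in the SPDK scripts and is not shown in this log.]
    # Hypothetical allow/deny gate consistent with the behaviour in the trace.
    pci_can_use() {
        local bdf=$1 b a
        for b in $PCI_BLOCKED; do           # an explicit block always wins:
            [[ $b == "$bdf" ]] && return 1  # "Skipping denied controller at 0000:5e:00.0"
        done
        [[ -z $PCI_ALLOWED ]] && return 0   # empty allow list = everything not blocked
        for a in $PCI_ALLOWED; do
            [[ $a == "$bdf" ]] && return 0  # allowed controllers get rebound nvme -> vfio-pci
        done
        return 1
    }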
00:03:05.055 20:03:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:05.055 20:03:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:05.055 20:03:41 -- common/autotest_common.sh@10 -- # set +x 00:03:05.055 ************************************ 00:03:05.055 START TEST hugepages 00:03:05.055 ************************************ 00:03:05.055 20:03:41 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:05.055 * Looking for test storage... 00:03:05.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:05.055 20:03:41 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:05.055 20:03:41 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:05.055 20:03:41 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:05.055 20:03:41 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:05.055 20:03:41 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:05.055 20:03:41 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:05.055 20:03:41 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:05.055 20:03:41 -- setup/common.sh@18 -- # local node= 00:03:05.055 20:03:41 -- setup/common.sh@19 -- # local var val 00:03:05.055 20:03:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.055 20:03:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.055 20:03:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.055 20:03:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.055 20:03:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.055 20:03:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.055 20:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.055 20:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.055 20:03:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 71237052 kB' 'MemAvailable: 76216764 kB' 'Buffers: 2696 kB' 'Cached: 14309304 kB' 'SwapCached: 0 kB' 'Active: 10169516 kB' 'Inactive: 4658688 kB' 'Active(anon): 9603292 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519456 kB' 'Mapped: 240372 kB' 'Shmem: 9087088 kB' 'KReclaimable: 534636 kB' 'Slab: 1074808 kB' 'SReclaimable: 534636 kB' 'SUnreclaim: 540172 kB' 'KernelStack: 19536 kB' 'PageTables: 9756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52952952 kB' 'Committed_AS: 11017332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212772 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB' 00:03:05.055
00:03:05.055-00:03:05.057 [setup/common.sh@32 then tests each field of the mem array against Hugepagesize — [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]], continue, re-split with IFS=': ' / read -r var val _ — and repeats this for every field from MemTotal through HugePages_Surp until the match below]
00:03:05.057 20:03:41 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 20:03:41 -- setup/common.sh@33 -- # echo 2048 20:03:41 --
setup/common.sh@33 -- # return 0 00:03:05.057 20:03:41 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:05.057 20:03:41 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:05.057 20:03:41 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:05.057 20:03:41 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:05.057 20:03:41 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:05.057 20:03:41 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:05.057 20:03:41 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:05.057 20:03:41 -- setup/hugepages.sh@207 -- # get_nodes 00:03:05.057 20:03:41 -- setup/hugepages.sh@27 -- # local node 00:03:05.057 20:03:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.057 20:03:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:05.057 20:03:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.057 20:03:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:05.057 20:03:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.057 20:03:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.057 20:03:41 -- setup/hugepages.sh@208 -- # clear_hp 00:03:05.057 20:03:41 -- setup/hugepages.sh@37 -- # local node hp 00:03:05.057 20:03:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.057 20:03:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.057 20:03:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:05.057 20:03:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.057 20:03:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:05.057 20:03:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.057 20:03:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.057 20:03:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:05.057 20:03:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.057 20:03:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:05.057 20:03:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:05.057 20:03:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:05.057 20:03:41 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:05.057 20:03:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:05.057 20:03:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:05.057 20:03:41 -- common/autotest_common.sh@10 -- # set +x 00:03:05.057 ************************************ 00:03:05.057 START TEST default_setup 00:03:05.057 ************************************ 00:03:05.057 20:03:41 -- common/autotest_common.sh@1102 -- # default_setup 00:03:05.057 20:03:41 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:05.057 20:03:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:05.057 20:03:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:05.057 20:03:41 -- setup/hugepages.sh@51 -- # shift 00:03:05.057 20:03:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:05.057 20:03:41 -- setup/hugepages.sh@52 -- # local node_ids 00:03:05.057 20:03:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.057 20:03:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:05.057 20:03:41 -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 0 00:03:05.057 20:03:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:05.057 20:03:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.057 20:03:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:05.057 20:03:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.057 20:03:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.057 20:03:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.057 20:03:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:05.057 20:03:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:05.057 20:03:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:05.057 20:03:41 -- setup/hugepages.sh@73 -- # return 0 00:03:05.057 20:03:41 -- setup/hugepages.sh@137 -- # setup output 00:03:05.057 20:03:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.057 20:03:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.596 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:07.596 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:07.596 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:08.537 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.537 20:03:45 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:08.537 20:03:45 -- setup/hugepages.sh@89 -- # local node 00:03:08.537 20:03:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.537 20:03:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.537 20:03:45 -- setup/hugepages.sh@92 -- # local surp 00:03:08.537 20:03:45 -- setup/hugepages.sh@93 -- # local resv 00:03:08.537 20:03:45 -- setup/hugepages.sh@94 -- # local anon 00:03:08.537 20:03:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.537 20:03:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.537 20:03:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.537 20:03:45 -- setup/common.sh@18 -- # local node= 00:03:08.537 20:03:45 -- setup/common.sh@19 -- # local var val 00:03:08.537 20:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.537 20:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.537 20:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.537 20:03:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.537 20:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.537 20:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.537 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.537 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.537 20:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
00:03:08.537 20:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73385420 kB' 'MemAvailable: 78365100 kB' 'Buffers: 2696 kB' 'Cached: 14309408 kB' 'SwapCached: 0 kB' 'Active: 10186764 kB' 'Inactive: 4658688 kB' 'Active(anon): 9620540 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536264 kB' 'Mapped: 240632 kB' 'Shmem: 9087192 kB' 'KReclaimable: 534604 kB' 'Slab: 1074040 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 539436 kB' 'KernelStack: 19504 kB' 'PageTables: 9476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11034424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace elided: one "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" test plus "continue" for every /proc/meminfo field before the match below]
00:03:08.538 20:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:08.538 20:03:45 -- setup/common.sh@33 -- # echo 0
00:03:08.538 20:03:45 -- setup/common.sh@33 -- # return 0
00:03:08.538 20:03:45 -- setup/hugepages.sh@97 -- # anon=0
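Each block of skipped comparisons collapsed above is one pass of get_meminfo from setup/common.sh: it reads the chosen meminfo file into an array, strips any leading "Node N" tag, splits each line on ': ', and prints the value of the first field whose name matches. A behavioral sketch under those assumptions (the in-tree helper streams the array through printf into the read loop, as the @16/@31 trace lines show; this version inlines that for readability):

    shopt -s extglob  # the +([0-9]) pattern below needs extended globbing

    # Print one field from /proc/meminfo, or from a per-node meminfo file
    # when a node index is passed as the second argument.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the comparisons elided above
            echo "$val"
            return 0
        done
        return 1
    }

Called as in this log: get_meminfo AnonHugePages prints 0, and get_meminfo HugePages_Surp 0 reads node 0's file instead of /proc/meminfo.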
00:03:08.538 20:03:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:08.538 20:03:45 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:08.538 20:03:45 -- setup/common.sh@18 -- # local node=
00:03:08.538 20:03:45 -- setup/common.sh@19 -- # local var val
00:03:08.538 20:03:45 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.538 20:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.538 20:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.538 20:03:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.538 20:03:45 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.538 20:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.538 20:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73385768 kB' 'MemAvailable: 78365448 kB' 'Buffers: 2696 kB' 'Cached: 14309412 kB' 'SwapCached: 0 kB' 'Active: 10185680 kB' 'Inactive: 4658688 kB' 'Active(anon): 9619456 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535600 kB' 'Mapped: 240540 kB' 'Shmem: 9087196 kB' 'KReclaimable: 534604 kB' 'Slab: 1073980 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 539376 kB' 'KernelStack: 19488 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11034436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace elided: one "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" test plus "continue" for every /proc/meminfo field before the match below]
00:03:08.540 20:03:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.540 20:03:45 -- setup/common.sh@33 -- # echo 0
00:03:08.540 20:03:45 -- setup/common.sh@33 -- # return 0
00:03:08.540 20:03:45 -- setup/hugepages.sh@99 -- # surp=0
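A quick consistency check on the snapshots being parsed: the HugePages_* fields are page counts, not kB, so the pool size in kB is HugePages_Total times Hugepagesize, which the kernel also reports directly as Hugetlb. With the values printed above (arithmetic shown for orientation only):

    # 1024 pages x 2048 kB/page = 2097152 kB, matching the Hugetlb field.
    (( 1024 * 2048 == 2097152 )) && echo 'hugepage pool accounting is consistent'

HugePages_Surp counts pages allocated beyond the static pool and HugePages_Rsvd counts pages promised to mappings but not yet faulted in; both are 0 here, which is what the verifier wants to see.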
00:03:08.540 20:03:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:08.540 20:03:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:08.540 20:03:45 -- setup/common.sh@18 -- # local node=
00:03:08.540 20:03:45 -- setup/common.sh@19 -- # local var val
00:03:08.540 20:03:45 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.540 20:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.540 20:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.540 20:03:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.540 20:03:45 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.540 20:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.540 20:03:45 -- setup/common.sh@31 -- # IFS=': '
00:03:08.540 20:03:45 -- setup/common.sh@31 -- # read -r var val _
00:03:08.540 20:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73385768 kB' 'MemAvailable: 78365448 kB' 'Buffers: 2696 kB' 'Cached: 14309424 kB' 'SwapCached: 0 kB' 'Active: 10185644 kB' 'Inactive: 4658688 kB' 'Active(anon): 9619420 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535512 kB' 'Mapped: 240540 kB' 'Shmem: 9087208 kB' 'KReclaimable: 534604 kB' 'Slab: 1073980 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 539376 kB' 'KernelStack: 19472 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11034452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace elided: one "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" test plus "continue" for every /proc/meminfo field before the match below]
00:03:08.541 20:03:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:08.541 20:03:45 -- setup/common.sh@33 -- # echo 0
00:03:08.541 20:03:45 -- setup/common.sh@33 -- # return 0
00:03:08.541 20:03:45 -- setup/hugepages.sh@100 -- # resv=0
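The echoes and checks that follow tie the four queries together: verification passes only if the configured total equals the requested count plus surplus plus reserved pages. Condensed into a hedged sketch (variable names follow the trace; the real hugepages.sh drives further per-node checks after this one):

    nr_hugepages=1024                      # what the test configured
    anon=$(get_meminfo AnonHugePages)      # kB of THP in use; 0 in this run
    surp=$(get_meminfo HugePages_Surp)     # surplus pages; 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # reserved-but-unfaulted pages; 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    # Healthy pool: Total == requested + Surp + Rsvd (all page counts).
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool out of balance' >&2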
00:03:08.541 20:03:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:08.541 nr_hugepages=1024
00:03:08.541 20:03:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:08.541 resv_hugepages=0
00:03:08.541 20:03:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:08.541 surplus_hugepages=0
00:03:08.541 20:03:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:08.541 anon_hugepages=0
00:03:08.541 20:03:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:08.541 20:03:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:08.541 20:03:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:08.541 20:03:45 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:08.541 20:03:45 -- setup/common.sh@18 -- # local node=
00:03:08.541 20:03:45 -- setup/common.sh@19 -- # local var val
00:03:08.541 20:03:45 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.541 20:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.541 20:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.541 20:03:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.541 20:03:45 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.541 20:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.541 20:03:45 -- setup/common.sh@31 -- # IFS=': '
00:03:08.541 20:03:45 -- setup/common.sh@31 -- # read -r var val _
00:03:08.541 20:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73385768 kB' 'MemAvailable: 78365448 kB' 'Buffers: 2696 kB' 'Cached: 14309436 kB' 'SwapCached: 0 kB' 'Active: 10185556 kB' 'Inactive: 4658688 kB' 'Active(anon): 9619332 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535384 kB' 'Mapped: 240540 kB' 'Shmem: 9087220 kB' 'KReclaimable: 534604 kB' 'Slab: 1073980 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 539376 kB' 'KernelStack: 19472 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11034464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace elided: one "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" test plus "continue" for every /proc/meminfo field before the match below]
00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:08.804 20:03:45 -- setup/common.sh@33 -- # echo 1024
00:03:08.804 20:03:45 -- setup/common.sh@33 -- # return 0
00:03:08.804 20:03:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:08.804 20:03:45 -- setup/hugepages.sh@112 -- # get_nodes
00:03:08.804 20:03:45 -- setup/hugepages.sh@27 -- # local node
00:03:08.804 20:03:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.804 20:03:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:08.804 20:03:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.804 20:03:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:08.804 20:03:45 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:08.804 20:03:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:08.804 20:03:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:08.804 20:03:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:08.804 20:03:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:08.804 20:03:45 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:08.804 20:03:45 -- setup/common.sh@18 -- # local node=0
00:03:08.804 20:03:45 -- setup/common.sh@19 -- # local var val
00:03:08.804 20:03:45 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.804 20:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
20:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.804 20:03:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.804 20:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.804 20:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 23575716 kB' 'MemUsed: 9058912 kB' 'SwapCached: 0 kB' 'Active: 4221020 kB' 'Inactive: 1226824 kB' 'Active(anon): 3767296 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226824 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5062184 kB' 'Mapped: 191820 kB' 'AnonPages: 388780 kB' 'Shmem: 3381636 kB' 'KernelStack: 11912 kB' 'PageTables: 6212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 246776 kB' 'Slab: 525076 kB' 'SReclaimable: 246776 kB' 'SUnreclaim: 278300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.804 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.804 20:03:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # continue 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.805 20:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ Unaccepted == 
00:03:08.805 20:03:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.805 20:03:45 -- setup/common.sh@33 -- # echo 0
00:03:08.805 20:03:45 -- setup/common.sh@33 -- # return 0
00:03:08.805 20:03:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:08.805 20:03:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:08.805 20:03:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:08.805 20:03:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:08.805 20:03:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:08.805 node0=1024 expecting 1024
00:03:08.805 20:03:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:08.805 real 0m4.216s
00:03:08.805 user 0m1.322s
00:03:08.805 sys 0m2.161s
00:03:08.805 20:03:46 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:08.805 20:03:46 -- common/autotest_common.sh@10 -- # set +x
00:03:08.805 ************************************
00:03:08.805 END TEST default_setup
00:03:08.805 ************************************
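What the elided traces above are doing: setup/common.sh's get_meminfo splits each line of the chosen meminfo file on ': ', continues past every field that is not the one requested, then echoes the value of the first match and returns. A minimal sketch of that probing pattern in plain bash (illustrative only, not the SPDK helper itself; meminfo_get is a made-up name):

    #!/usr/bin/env bash
    # Sketch: return the value of one field from /proc/meminfo,
    # mirroring the compare-and-continue loop visible in the xtrace.
    meminfo_get() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # skip non-matching fields
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # field not present
    }

    meminfo_get HugePages_Total   # prints e.g. 1024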
00:03:08.805 20:03:46 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:08.805 20:03:46 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:03:08.805 20:03:46 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:03:08.805 20:03:46 -- common/autotest_common.sh@10 -- # set +x
00:03:08.805 ************************************
00:03:08.805 START TEST per_node_1G_alloc
00:03:08.805 ************************************
00:03:08.805 20:03:46 -- common/autotest_common.sh@1102 -- # per_node_1G_alloc
00:03:08.805 20:03:46 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:08.805 20:03:46 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:08.805 20:03:46 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:08.805 20:03:46 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:08.805 20:03:46 -- setup/hugepages.sh@51 -- # shift
00:03:08.805 20:03:46 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:08.805 20:03:46 -- setup/hugepages.sh@52 -- # local node_ids
00:03:08.805 20:03:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:08.805 20:03:46 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:08.805 20:03:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:08.805 20:03:46 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:08.805 20:03:46 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:08.805 20:03:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:08.805 20:03:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:08.805 20:03:46 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:08.805 20:03:46 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:08.805 20:03:46 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:08.805 20:03:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:08.805 20:03:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:08.805 20:03:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:08.805 20:03:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:08.805 20:03:46 -- setup/hugepages.sh@73 -- # return 0
00:03:08.805 20:03:46 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:08.805 20:03:46 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:08.805 20:03:46 -- setup/hugepages.sh@146 -- # setup output
00:03:08.805 20:03:46 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.805 20:03:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:11.343 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:11.602 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:11.602 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:11.602 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
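NRHUGE=512 with HUGENODE=0,1 asks scripts/setup.sh to place 512 hugepages on each of the two NUMA nodes individually rather than letting the kernel spread one global count. On Linux, per-node reservations of this kind go through the standard sysfs files; the loop below is a hedged sketch of the idea under those assumptions, not the actual setup.sh logic:

    #!/usr/bin/env bash
    # Sketch: reserve $NRHUGE 2 MiB hugepages on each node listed in $HUGENODE.
    # Uses the standard kernel sysfs layout; must run as root. Illustrative only.
    NRHUGE=${NRHUGE:-512}
    IFS=',' read -ra nodes <<< "${HUGENODE:-0}"
    for n in "${nodes[@]}"; do
        echo "$NRHUGE" \
            > "/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages"
    done
    # Show what each node actually got (file:value per node)
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages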
00:03:11.865 20:03:49 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:11.865 20:03:49 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:11.865 20:03:49 -- setup/hugepages.sh@89 -- # local node
00:03:11.865 20:03:49 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:11.865 20:03:49 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:11.865 20:03:49 -- setup/hugepages.sh@92 -- # local surp
00:03:11.865 20:03:49 -- setup/hugepages.sh@93 -- # local resv
00:03:11.865 20:03:49 -- setup/hugepages.sh@94 -- # local anon
00:03:11.865 20:03:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:11.865 20:03:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:11.865 20:03:49 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:11.865 20:03:49 -- setup/common.sh@18 -- # local node=
00:03:11.865 20:03:49 -- setup/common.sh@19 -- # local var val
00:03:11.865 20:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.865 20:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.865 20:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.865 20:03:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.865 20:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.865 20:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.865 20:03:49 -- setup/common.sh@31 -- # IFS=': '
00:03:11.866 20:03:49 -- setup/common.sh@31 -- # read -r var val _
00:03:11.866 20:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73438644 kB' 'MemAvailable: 78418324 kB' 'Buffers: 2696 kB' 'Cached: 14309520 kB' 'SwapCached: 0 kB' 'Active: 10184516 kB' 'Inactive: 4658688 kB' 'Active(anon): 9618292 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534188 kB' 'Mapped: 239380 kB' 'Shmem: 9087304 kB' 'KReclaimable: 534604 kB' 'Slab: 1073024 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 538420 kB' 'KernelStack: 19424 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11022400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212932 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
00:03:11.866 20:03:49 -- setup/common.sh@32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue [identical compare-and-continue trace elided for the meminfo fields, MemTotal through HardwareCorrupted]
00:03:11.867 20:03:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:11.867 20:03:49 -- setup/common.sh@33 -- # echo 0
00:03:11.867 20:03:49 -- setup/common.sh@33 -- # return 0
00:03:11.867 20:03:49 -- setup/hugepages.sh@97 -- # anon=0
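The anon=0 result matters because transparent hugepages could otherwise inflate the huge-page picture: the hugepages.sh@96 test above first confirms that /sys/kernel/mm/transparent_hugepage/enabled is not pinned to [never], and only then samples AnonHugePages, which the test expects to be 0. A small sketch of that gate (the sysfs path is the standard kernel one; the surrounding logic is illustrative, not the SPDK script):

    #!/usr/bin/env bash
    # Sketch: sample AnonHugePages only when THP could actually produce them.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP enabled in some mode: anonymous huge mappings are possible
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"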
00:03:11.867 20:03:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:11.867 20:03:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.867 20:03:49 -- setup/common.sh@18 -- # local node=
00:03:11.867 20:03:49 -- setup/common.sh@19 -- # local var val
00:03:11.867 20:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.867 20:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.867 20:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.867 20:03:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.867 20:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.867 20:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.867 20:03:49 -- setup/common.sh@31 -- # IFS=': '
00:03:11.867 20:03:49 -- setup/common.sh@31 -- # read -r var val _
00:03:11.867 20:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73440788 kB' 'MemAvailable: 78420468 kB' 'Buffers: 2696 kB' 'Cached: 14309524 kB' 'SwapCached: 0 kB' 'Active: 10184808 kB' 'Inactive: 4658688 kB' 'Active(anon): 9618584 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534560 kB' 'Mapped: 239380 kB' 'Shmem: 9087308 kB' 'KReclaimable: 534604 kB' 'Slab: 1073024 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 538420 kB' 'KernelStack: 19408 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11022412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212900 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
00:03:11.867 20:03:49 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue [identical compare-and-continue trace elided for the meminfo fields, MemTotal through HugePages_Rsvd]
00:03:11.868 20:03:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.868 20:03:49 -- setup/common.sh@33 -- # echo 0
00:03:11.868 20:03:49 -- setup/common.sh@33 -- # return 0
00:03:11.868 20:03:49 -- setup/hugepages.sh@99 -- # surp=0
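HugePages_Surp counts surplus pages the kernel allocated beyond the static pool, which it will only do up to the vm.nr_overcommit_hugepages ceiling; surp=0 here means the test pool was satisfied without any overcommit. For reference, the counters involved can be read directly (standard procfs paths; the snippet is illustrative):

    #!/usr/bin/env bash
    # Sketch: static pool vs. overcommit ceiling vs. surplus currently in use.
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    cat /proc/sys/vm/nr_overcommit_hugepages   # 0 => no surplus pages allowed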
00:03:11.868 20:03:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:11.868 20:03:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:11.868 20:03:49 -- setup/common.sh@18 -- # local node=
00:03:11.868 20:03:49 -- setup/common.sh@19 -- # local var val
00:03:11.868 20:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.868 20:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.868 20:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.868 20:03:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.868 20:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.868 20:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.868 20:03:49 -- setup/common.sh@31 -- # IFS=': '
00:03:11.868 20:03:49 -- setup/common.sh@31 -- # read -r var val _
00:03:11.868 20:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73440456 kB' 'MemAvailable: 78420136 kB' 'Buffers: 2696 kB' 'Cached: 14309524 kB' 'SwapCached: 0 kB' 'Active: 10184088 kB' 'Inactive: 4658688 kB' 'Active(anon): 9617864 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533804 kB' 'Mapped: 239356 kB' 'Shmem: 9087308 kB' 'KReclaimable: 534604 kB' 'Slab: 1073032 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 538428 kB' 'KernelStack: 19392 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11022424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212868 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
00:03:11.868 20:03:49 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue [identical compare-and-continue trace elided for the meminfo fields, MemTotal through HugePages_Free]
setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # continue 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.869 20:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.869 20:03:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.869 20:03:49 -- setup/common.sh@33 -- # echo 0 00:03:11.869 20:03:49 -- setup/common.sh@33 -- # return 0 00:03:11.869 20:03:49 -- setup/hugepages.sh@100 -- # resv=0 
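The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo line by line: the backslash-escaped right-hand side (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) is just how bash xtrace prints a quoted [[ ... == ... ]] pattern, and "continue" fires for every key that is not the one requested. A minimal self-contained sketch of the same logic (a hedged simplification; the function name is illustrative, and the real helper uses mapfile rather than a streaming read):

    # get_meminfo_sketch KEY [NODE]: print the value of one meminfo field.
    # With NODE given, read the per-node file under sysfs instead of procfs.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node N "; strip it so the
        # key is always the first field, then split on ": " as the trace shows.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # Usage matching the calls traced in this log:
    #   get_meminfo_sketch HugePages_Rsvd      -> 0   (stored as resv=0 above)
    #   get_meminfo_sketch HugePages_Surp 0    -> node0's surplus page count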
00:03:11.869 20:03:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:11.869 nr_hugepages=1024
00:03:11.869 20:03:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:11.869 resv_hugepages=0
00:03:11.869 20:03:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:11.869 surplus_hugepages=0
00:03:11.869 20:03:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:11.869 anon_hugepages=0
00:03:11.869 20:03:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.870 20:03:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:11.870 20:03:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:11.870 20:03:49 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:11.870 20:03:49 -- setup/common.sh@18 -- # local node=
00:03:11.870 20:03:49 -- setup/common.sh@19 -- # local var val
00:03:11.870 20:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.870 20:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.870 20:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.870 20:03:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.870 20:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.870 20:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.870 20:03:49 -- setup/common.sh@31 -- # IFS=': '
00:03:11.870 20:03:49 -- setup/common.sh@31 -- # read -r var val _
00:03:11.870 20:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73440568 kB' 'MemAvailable: 78420248 kB' 'Buffers: 2696 kB' 'Cached: 14309564 kB' 'SwapCached: 0 kB' 'Active: 10183808 kB' 'Inactive: 4658688 kB' 'Active(anon): 9617584 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533480 kB' 'Mapped: 239356 kB' 'Shmem: 9087348 kB' 'KReclaimable: 534604 kB' 'Slab: 1073028 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 538424 kB' 'KernelStack: 19392 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11022440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212868 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace elided: the read loop tests and skips every key from MemTotal through Unaccepted until the requested one matches]
00:03:11.871 20:03:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:11.871 20:03:49 -- setup/common.sh@33 -- # echo 1024
00:03:11.871 20:03:49 -- setup/common.sh@33 -- # return 0
00:03:11.871 20:03:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.871 20:03:49 -- setup/hugepages.sh@112 -- # get_nodes
00:03:11.871 20:03:49 -- setup/hugepages.sh@27 -- # local node
00:03:11.871 20:03:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.871 20:03:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:11.871 20:03:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.871 20:03:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:11.871 20:03:49 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.871 20:03:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.871 20:03:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.871 20:03:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.871 20:03:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:11.871 20:03:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.871 20:03:49 -- setup/common.sh@18 -- # local node=0
00:03:11.871 20:03:49 -- setup/common.sh@19 -- # local var val
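What hugepages.sh@107-@110 just verified is the kernel's hugepage accounting identity: the requested page count equals HugePages_Total, and the total equals requested + surplus + reserved. The same check can be run by hand against procfs (a standalone sketch, not the test's own code; "want" is an assumed variable name):

    want=1024  # pages requested via vm.nr_hugepages
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    rsvd=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    if (( total == want && total == want + surp + rsvd )); then
        echo "hugepage accounting consistent: total=$total surp=$surp rsvd=$rsvd"
    else
        echo "hugepage accounting mismatch: total=$total surp=$surp rsvd=$rsvd" >&2
    fi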
00:03:11.871 20:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.871 20:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.871 20:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:11.871 20:03:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:11.871 20:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.871 20:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.871 20:03:49 -- setup/common.sh@31 -- # IFS=': '
00:03:11.871 20:03:49 -- setup/common.sh@31 -- # read -r var val _
00:03:11.871 20:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 24620500 kB' 'MemUsed: 8014128 kB' 'SwapCached: 0 kB' 'Active: 4219236 kB' 'Inactive: 1226824 kB' 'Active(anon): 3765512 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226824 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5062248 kB' 'Mapped: 191768 kB' 'AnonPages: 387076 kB' 'Shmem: 3381700 kB' 'KernelStack: 11880 kB' 'PageTables: 6004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 246776 kB' 'Slab: 524080 kB' 'SReclaimable: 246776 kB' 'SUnreclaim: 277304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the read loop tests and skips every node0 meminfo key from MemTotal through HugePages_Free until HugePages_Surp matches]
00:03:11.872 20:03:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.872 20:03:49 -- setup/common.sh@33 -- # echo 0
00:03:11.872 20:03:49 -- setup/common.sh@33 -- # return 0
00:03:11.872 20:03:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:11.872 20:03:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.872 20:03:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.872 20:03:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:11.872 20:03:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.872 20:03:49 -- setup/common.sh@18 -- # local node=1
00:03:11.872 20:03:49 -- setup/common.sh@19 -- # local var val
00:03:11.872 20:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.872 20:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.872 20:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:11.872 20:03:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:11.872 20:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.872 20:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.872 20:03:49 -- setup/common.sh@31 -- # IFS=': '
00:03:11.872 20:03:49 -- setup/common.sh@31 -- # read -r var val _
00:03:11.872 20:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688372 kB' 'MemFree: 48821056 kB' 'MemUsed: 11867316 kB' 'SwapCached: 0 kB' 'Active: 5964584 kB' 'Inactive: 3431864 kB' 'Active(anon): 5852084 kB' 'Inactive(anon): 0 kB' 'Active(file): 112500 kB' 'Inactive(file): 3431864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9250024 kB' 'Mapped: 47588 kB' 'AnonPages: 146404 kB' 'Shmem: 5705660 kB' 'KernelStack: 7512 kB' 'PageTables: 3036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 287828 kB' 'Slab: 548948 kB' 'SReclaimable: 287828 kB' 'SUnreclaim: 261120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
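Each per-node snapshot above is read from /sys/devices/system/node/nodeN/meminfo; unlike /proc/meminfo, every line there carries a "Node N" prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion strips. To eyeball the same per-node hugepage counters outside the harness (plain shell, nothing SPDK-specific):

    # Print HugePages_{Total,Free,Surp} for every NUMA node.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        # Lines look like: "Node 0 HugePages_Total:   512"
        grep -E 'HugePages_(Total|Free|Surp)' "$node_dir/meminfo"
    done

On this machine that reports 512 total and 512 free pages on each of the two nodes, which is exactly what the test expects below.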
[xtrace elided: the read loop tests and skips every node1 meminfo key from MemTotal through HugePages_Free until HugePages_Surp matches]
00:03:11.873 20:03:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.873 20:03:49 -- setup/common.sh@33 -- # echo 0
00:03:11.873 20:03:49 -- setup/common.sh@33 -- # return 0
00:03:11.873 20:03:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.133 20:03:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.133 20:03:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.133 20:03:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:12.133 20:03:49 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:12.133 node0=512 expecting 512
00:03:12.133 20:03:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.133 20:03:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.133 20:03:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
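The sorted_t/sorted_s writes in the loop above (finishing with node1 just below) are a compact bash idiom for "assert all per-node counts agree": each observed count is used as an associative-array key, so after the loop the array holds exactly one key iff every node saw the same value. A hedged sketch of the idiom in isolation (array and variable names here are illustrative, not the script's own):

    declare -A seen=()
    per_node=(512 512)   # e.g. nodes_test[] after the surplus/reserved accounting
    for node in "${!per_node[@]}"; do
        seen[${per_node[node]}]=1          # key by value: duplicates collapse
        echo "node$node=${per_node[node]} expecting ${per_node[node]}"
    done
    # Exactly one distinct value across nodes means the split was even.
    (( ${#seen[@]} == 1 )) && echo "even allocation confirmed"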
00:03:12.133 20:03:49 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:12.133 node1=512 expecting 512 00:03:12.133 20:03:49 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:12.133 00:03:12.133 real 0m3.239s 00:03:12.133 user 0m1.288s 00:03:12.133 sys 0m1.920s 00:03:12.133 20:03:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:12.133 20:03:49 -- common/autotest_common.sh@10 -- # set +x 00:03:12.133 ************************************ 00:03:12.133 END TEST per_node_1G_alloc 00:03:12.133 ************************************ 00:03:12.133 20:03:49 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:12.133 20:03:49 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:12.133 20:03:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:12.133 20:03:49 -- common/autotest_common.sh@10 -- # set +x 00:03:12.133 ************************************ 00:03:12.133 START TEST even_2G_alloc 00:03:12.133 ************************************ 00:03:12.133 20:03:49 -- common/autotest_common.sh@1102 -- # even_2G_alloc 00:03:12.133 20:03:49 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:12.133 20:03:49 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.133 20:03:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:12.133 20:03:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.133 20:03:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.133 20:03:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:12.133 20:03:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:12.133 20:03:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.133 20:03:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.133 20:03:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.133 20:03:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.133 20:03:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.133 20:03:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:12.133 20:03:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:12.133 20:03:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.133 20:03:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:12.133 20:03:49 -- setup/hugepages.sh@83 -- # : 512 00:03:12.133 20:03:49 -- setup/hugepages.sh@84 -- # : 1 00:03:12.133 20:03:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.133 20:03:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:12.133 20:03:49 -- setup/hugepages.sh@83 -- # : 0 00:03:12.133 20:03:49 -- setup/hugepages.sh@84 -- # : 0 00:03:12.133 20:03:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.133 20:03:49 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:12.133 20:03:49 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:12.133 20:03:49 -- setup/hugepages.sh@153 -- # setup output 00:03:12.133 20:03:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.133 20:03:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.671 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:14.671 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:14.671 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:14.671 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:14.671 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:14.671 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:14.671 
0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.934 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:14.934 20:03:52 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:14.934 20:03:52 -- setup/hugepages.sh@89 -- # local node
00:03:14.934 20:03:52 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:14.934 20:03:52 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:14.934 20:03:52 -- setup/hugepages.sh@92 -- # local surp
00:03:14.934 20:03:52 -- setup/hugepages.sh@93 -- # local resv
00:03:14.934 20:03:52 -- setup/hugepages.sh@94 -- # local anon
00:03:14.934 20:03:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:14.934 20:03:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:14.934 20:03:52 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:14.934 20:03:52 -- setup/common.sh@18 -- # local node=
00:03:14.934 20:03:52 -- setup/common.sh@19 -- # local var val
00:03:14.934 20:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.934 20:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.934 20:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.934 20:03:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.934 20:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.934 20:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.934 20:03:52 -- setup/common.sh@31 -- # IFS=': '
00:03:14.934 20:03:52 -- setup/common.sh@31 -- # read -r var val _
00:03:14.934 20:03:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73439440 kB' 'MemAvailable: 78419120 kB' 'Buffers: 2696 kB' 'Cached: 14309644 kB' 'SwapCached: 0 kB' 'Active: 10184144 kB' 'Inactive: 4658688 kB' 'Active(anon): 9617920 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533224 kB' 'Mapped: 239484 kB' 'Shmem: 9087428 kB' 'KReclaimable: 534604 kB' 'Slab: 1073016 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 538412 kB' 'KernelStack: 19408 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11023028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212900 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
00:03:14.934 20:03:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] (per-field xtrace condensed: each field of the snapshot above fails the match, hits setup/common.sh@32 continue, and the next line is read)
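The scan pattern above is setup/common.sh's get_meminfo: load the meminfo file into an array, strip any "Node N " prefix, then walk "Field: value" pairs until the requested field matches and echo its value. A minimal standalone sketch of that parsing, reconstructed from the xtrace rather than copied from the SPDK source (error handling and the loop shape are assumptions):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  # get_meminfo FIELD [NODE] -> prints FIELD's value from /proc/meminfo,
  # or from the given NUMA node's own meminfo file when NODE is set.
  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo mem var val _ line
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo prefixes every line with "Node N "; strip it,
      # exactly as the traced mem=("${mem[@]#Node +([0-9]) }") does.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the @32 lines in the trace
          echo "$val"                        # the @33 echo in the trace
          return 0
      done
      return 1
  }

  # e.g.: get_meminfo HugePages_Total    -> 1024 on this box
  #       get_meminfo HugePages_Surp 0   -> node0's surplus count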
00:03:14.935 20:03:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.935 20:03:52 -- setup/common.sh@33 -- # echo 0
00:03:14.935 20:03:52 -- setup/common.sh@33 -- # return 0
00:03:14.935 20:03:52 -- setup/hugepages.sh@97 -- # anon=0
00:03:14.935 20:03:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:14.935 20:03:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.935 20:03:52 -- setup/common.sh@18 -- # local node=
00:03:14.935 20:03:52 -- setup/common.sh@19 -- # local var val
00:03:14.935 20:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.935 20:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.935 20:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.935 20:03:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.935 20:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.935 20:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.935 20:03:52 -- setup/common.sh@31 -- # IFS=': '
00:03:14.935 20:03:52 -- setup/common.sh@31 -- # read -r var val _
00:03:14.935 20:03:52 -- setup/common.sh@16 -- # printf '%s\n' ... (second full meminfo snapshot; identical to the read above except MemFree: 73439920 kB, Active: 10184348 kB, Active(anon): 9618124 kB, AnonPages: 533508 kB, Slab: 1073008 kB, SUnreclaim: 538404 kB, PageTables: 9120 kB, Committed_AS: 11023040 kB, VmallocUsed: 212884 kB)
00:03:14.935 20:03:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (per-field xtrace condensed: no match until HugePages_Surp)
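A note on the backslash-riddled comparisons: in bash, the right-hand side of [[ == ]] is a glob pattern, so the script compares against a quoted variable to force a literal match, and xtrace prints that quoted word with every character escaped, which is why the log shows \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A tiny standalone illustration (not from the SPDK tree):

  #!/usr/bin/env bash
  get=HugePages_Surp
  set -x
  # Quoting "$get" makes this a literal string compare, not a glob match;
  # under set -x bash prints it as [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]],
  # exactly the form these log lines show.
  [[ MemTotal == "$get" ]] || echo "no match, scan continues"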
00:03:14.936 20:03:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.936 20:03:52 -- setup/common.sh@33 -- # echo 0
00:03:14.936 20:03:52 -- setup/common.sh@33 -- # return 0
00:03:14.936 20:03:52 -- setup/hugepages.sh@99 -- # surp=0
00:03:14.936 20:03:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:14.936 20:03:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:14.936 20:03:52 -- setup/common.sh@18 -- # local node=
00:03:14.936 20:03:52 -- setup/common.sh@19 -- # local var val
00:03:14.936 20:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.937 20:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.937 20:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.937 20:03:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.937 20:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.937 20:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.937 20:03:52 -- setup/common.sh@31 -- # IFS=': '
00:03:14.937 20:03:52 -- setup/common.sh@31 -- # read -r var val _
00:03:14.937 20:03:52 -- setup/common.sh@16 -- # printf '%s\n' ... (third snapshot; differs from the previous read only in MemFree: 73440340 kB, Cached: 14309656 kB, Active: 10183920 kB, Active(anon): 9617696 kB, AnonPages: 533468 kB, Mapped: 239408 kB, Shmem: 9087440 kB, PageTables: 9112 kB, Committed_AS: 11023056 kB)
00:03:14.937 20:03:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] (per-field xtrace condensed: no match until HugePages_Rsvd)
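Each echo 0 / return 0 pair above is get_meminfo handing its result back on stdout; the caller in setup/hugepages.sh captures it with command substitution, which is why the very next trace line is a plain assignment. The calling pattern as reconstructed from the trace (the surrounding function body is an assumption):

  # Inside verify_nr_hugepages (sketch, not the verbatim script):
  anon=$(get_meminfo AnonHugePages)    # traced as hugepages.sh@97, then anon=0
  surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99, then surp=0
  resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100, then resv=0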
00:03:14.938 20:03:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:14.938 20:03:52 -- setup/common.sh@33 -- # echo 0
00:03:14.938 20:03:52 -- setup/common.sh@33 -- # return 0
00:03:14.938 20:03:52 -- setup/hugepages.sh@100 -- # resv=0
00:03:14.938 20:03:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:14.938 nr_hugepages=1024
00:03:14.938 20:03:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:14.938 resv_hugepages=0
00:03:14.938 20:03:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:14.938 surplus_hugepages=0
00:03:14.938 20:03:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:14.938 anon_hugepages=0
00:03:14.938 20:03:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.938 20:03:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:14.938 20:03:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:14.938 20:03:52 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:14.938 20:03:52 -- setup/common.sh@18 -- # local node=
00:03:14.938 20:03:52 -- setup/common.sh@19 -- # local var val
00:03:14.938 20:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.938 20:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.938 20:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.938 20:03:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.938 20:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.199 20:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.199 20:03:52 -- setup/common.sh@31 -- # IFS=': '
00:03:15.199 20:03:52 -- setup/common.sh@31 -- # read -r var val _
00:03:15.199 20:03:52 -- setup/common.sh@16 -- # printf '%s\n' ... (fourth snapshot; differs from the previous read only in MemFree: 73440088 kB, Cached: 14309672 kB, Active: 10183948 kB, Active(anon): 9617724 kB, AnonPages: 533480 kB, Shmem: 9087456 kB, Committed_AS: 11023068 kB)
00:03:15.200 20:03:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] (per-field xtrace condensed: no match until HugePages_Total)
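The @107-@110 arithmetic above is the actual verification: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages, and here 1024 == 1024 + 0 + 0 holds. The same invariant as a compact sketch (shape assumed; the traced script reads the values through get_meminfo as shown earlier):

  nr_hugepages=1024                      # what the test configured
  total=$(get_meminfo HugePages_Total)   # 1024 in this run
  surp=$(get_meminfo HugePages_Surp)     # 0
  resv=$(get_meminfo HugePages_Rsvd)     # 0
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent"
  else
      echo "unexpected hugepage accounting: total=$total" >&2
      exit 1
  fi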
00:03:15.201 20:03:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.201 20:03:52 -- setup/common.sh@33 -- # echo 1024
00:03:15.201 20:03:52 -- setup/common.sh@33 -- # return 0
00:03:15.201 20:03:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.201 20:03:52 -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.201 20:03:52 -- setup/hugepages.sh@27 -- # local node
00:03:15.201 20:03:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.201 20:03:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.201 20:03:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.201 20:03:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.201 20:03:52 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.201 20:03:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:15.201 20:03:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.201 20:03:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.201 20:03:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.201 20:03:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.201 20:03:52 -- setup/common.sh@18 -- # local node=0
00:03:15.201 20:03:52 -- setup/common.sh@19 -- # local var val
00:03:15.201 20:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.201 20:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.201 20:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.201 20:03:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.201 20:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.201 20:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.201 20:03:52 -- setup/common.sh@31 -- # IFS=': '
00:03:15.201 20:03:52 -- setup/common.sh@31 -- # read -r var val _
00:03:15.201 20:03:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 24617000 kB' 'MemUsed: 8017628 kB' 'SwapCached: 0 kB' 'Active: 4219084 kB' 'Inactive: 1226824 kB' 'Active(anon): 3765360 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226824 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5062312 kB' 'Mapped: 191776 kB' 'AnonPages: 386796 kB' 'Shmem: 3381764 kB' 'KernelStack: 11912 kB' 'PageTables: 6072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 246776 kB' 'Slab: 524128 kB' 'SReclaimable: 246776 kB' 'SUnreclaim: 277352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:15.201 20:03:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... (per-field scan of node0's snapshot, condensed; the trace continues in the same pattern)
00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.202 20:03:52 -- setup/common.sh@33 -- # echo 0 00:03:15.202 20:03:52 -- setup/common.sh@33 -- # return 0 00:03:15.202 20:03:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.202 20:03:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.202 20:03:52 -- setup/hugepages.sh@116 -- # (( 
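The run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / continue entries above is setup/common.sh's get_meminfo walking every meminfo field until the requested one matches, then echoing its value. A minimal standalone sketch of that lookup, assuming GNU bash with extglob and the standard "Field: value kB" layout (illustrative, not the verbatim SPDK helper):

#!/usr/bin/env bash
# Minimal sketch of the lookup traced above: fetch one field from
# /proc/meminfo, or from /sys/devices/system/node/node<N>/meminfo when a
# node index is given.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo mem

    # NUMA systems expose a per-node meminfo; otherwise use the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk every "Field: value [kB]" line until the requested field matches,
    # exactly the [[ ... ]] / continue cycle visible in the trace.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total    # printed 1024 in the trace above
get_meminfo HugePages_Surp 0   # per-node form; printed 0 for node0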
00:03:15.202 20:03:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.202 20:03:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.202 20:03:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:15.202 20:03:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.202 20:03:52 -- setup/common.sh@18 -- # local node=1
00:03:15.202 20:03:52 -- setup/common.sh@19 -- # local var val
00:03:15.202 20:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.202 20:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.202 20:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:15.202 20:03:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:15.202 20:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.202 20:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.202 20:03:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688372 kB' 'MemFree: 48823308 kB' 'MemUsed: 11865064 kB' 'SwapCached: 0 kB' 'Active: 5965228 kB' 'Inactive: 3431864 kB' 'Active(anon): 5852728 kB' 'Inactive(anon): 0 kB' 'Active(file): 112500 kB' 'Inactive(file): 3431864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9250068 kB' 'Mapped: 47632 kB' 'AnonPages: 147032 kB' 'Shmem: 5705704 kB' 'KernelStack: 7512 kB' 'PageTables: 3092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 287828 kB' 'Slab: 548880 kB' 'SReclaimable: 287828 kB' 'SUnreclaim: 261052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:15.202 20:03:52 -- setup/common.sh@31 -- # IFS=': '
00:03:15.202 20:03:52 -- setup/common.sh@31 -- # read -r var val _
00:03:15.202 20:03:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.202 20:03:52 -- setup/common.sh@32 -- # continue
[... the same "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / continue / IFS / read cycle repeats for every remaining node1 field ...]
00:03:15.203 20:03:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.203 20:03:52 -- setup/common.sh@33 -- # echo 0
00:03:15.203 20:03:52 -- setup/common.sh@33 -- # return 0
00:03:15.203 20:03:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.203 20:03:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.203 20:03:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.203 20:03:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.203 20:03:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:15.203 node0=512 expecting 512
00:03:15.203 20:03:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.203 20:03:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.203 20:03:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.203 20:03:52 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:15.203 node1=512 expecting 512
00:03:15.203 20:03:52 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:15.203 
00:03:15.203 real 0m3.101s
00:03:15.203 user 0m1.207s
00:03:15.203 sys 0m1.834s
00:03:15.203 20:03:52 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:15.203 20:03:52 -- common/autotest_common.sh@10 -- # set +x
00:03:15.203 ************************************
00:03:15.203 END TEST even_2G_alloc
00:03:15.203 ************************************
00:03:15.203 20:03:52 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:15.203 20:03:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:03:15.203 20:03:52 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:03:15.203 20:03:52 -- common/autotest_common.sh@10 -- # set +x
00:03:15.203 ************************************
00:03:15.203 START TEST odd_alloc
00:03:15.203 ************************************
00:03:15.203 20:03:52 -- common/autotest_common.sh@1102 -- # odd_alloc
00:03:15.203 20:03:52 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:15.203 20:03:52 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:15.203 20:03:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:15.203 20:03:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
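The get_test_nr_hugepages 2098176 call that opens the odd_alloc test converts a kB size into a count of 2048 kB hugepages. The rounding below is an assumption on my part, chosen so the traced pair (2098176 kB in, nr_hugepages=1025 out, seen just below) is reproduced:

# Sketch of the sizing step traced at hugepages.sh@159: a HUGEMEM request of
# 2049 MB becomes 2098176 kB, which rounds up to an odd count of 1025 pages.
default_hugepages=2048  # kB per hugepage on x86_64
size_kb=2098176         # 2049 MB * 1024, the odd_alloc request
nr_hugepages=$(((size_kb + default_hugepages - 1) / default_hugepages))
echo "nr_hugepages=$nr_hugepages"  # -> 1025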
00:03:15.203 20:03:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:15.203 20:03:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:15.203 20:03:52 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:15.203 20:03:52 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.203 20:03:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:15.203 20:03:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:15.203 20:03:52 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.203 20:03:52 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.203 20:03:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:15.203 20:03:52 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:15.203 20:03:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.203 20:03:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:15.203 20:03:52 -- setup/hugepages.sh@83 -- # : 513
00:03:15.203 20:03:52 -- setup/hugepages.sh@84 -- # : 1
00:03:15.204 20:03:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.204 20:03:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:15.204 20:03:52 -- setup/hugepages.sh@83 -- # : 0
00:03:15.204 20:03:52 -- setup/hugepages.sh@84 -- # : 0
00:03:15.204 20:03:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.204 20:03:52 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:15.204 20:03:52 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:15.204 20:03:52 -- setup/hugepages.sh@160 -- # setup output
00:03:15.204 20:03:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.204 20:03:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:17.790 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:18.050 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.050 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.050 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.314 20:03:55 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:18.314 20:03:55 -- setup/hugepages.sh@89 -- # local node
00:03:18.314 20:03:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.314 20:03:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.314 20:03:55 -- setup/hugepages.sh@92 -- # local surp
00:03:18.314 20:03:55 -- setup/hugepages.sh@93 -- # local resv
00:03:18.314 20:03:55 -- setup/hugepages.sh@94 -- # local anon
00:03:18.314 20:03:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
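The @81-@84 loop traced above, just before the setup.sh run, distributes that odd page count across the two nodes: 512 on node1 and the extra page on node0. A small re-derivation of the split (function and variable names are illustrative; the division and remainder handling mirror the logged assignments):

# Re-derivation of the split traced above: 1025 pages over 2 nodes becomes
# 512 + 513, with the extra page landing on the lowest-numbered node.
split_hugepages() {
    local total=$1 nodes=$2 i share rem
    share=$((total / nodes))   # 1025 / 2 = 512
    rem=$((total % nodes))     # 1025 % 2 = 1
    # Same order as the trace: highest node index is assigned first.
    for ((i = nodes - 1; i >= 0; i--)); do
        if ((i < rem)); then
            echo "node$i=$((share + 1))"
        else
            echo "node$i=$share"
        fi
    done
}

split_hugepages 1025 2
# node1=512
# node0=513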
00:03:18.314 20:03:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.314 20:03:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.314 20:03:55 -- setup/common.sh@18 -- # local node=
00:03:18.314 20:03:55 -- setup/common.sh@19 -- # local var val
00:03:18.314 20:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.314 20:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.314 20:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.314 20:03:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.314 20:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.314 20:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.314 20:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:18.314 20:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:18.314 20:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73428200 kB' 'MemAvailable: 78407880 kB' 'Buffers: 2696 kB' 'Cached: 14309768 kB' 'SwapCached: 0 kB' 'Active: 10186092 kB' 'Inactive: 4658688 kB' 'Active(anon): 9619868 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535648 kB' 'Mapped: 239520 kB' 'Shmem: 9087552 kB' 'KReclaimable: 534604 kB' 'Slab: 1072604 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 538000 kB' 'KernelStack: 19360 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000504 kB' 'Committed_AS: 11027716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212932 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
00:03:18.314 20:03:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.314 20:03:55 -- setup/common.sh@32 -- # continue
[... the same "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / continue / IFS / read cycle repeats for every remaining field ...]
00:03:18.315 20:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.315 20:03:55 -- setup/common.sh@33 -- # echo 0
00:03:18.315 20:03:55 -- setup/common.sh@33 -- # return 0
00:03:18.315 20:03:55 -- setup/hugepages.sh@97 -- # anon=0
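The @96 test above reads the transparent-hugepage policy and only treats AnonHugePages as relevant when THP is not set to [never]. A sketch of that guard, using the standard kernel sysfs/procfs locations (the kB-to-pages conversion at the end is my assumption; the trace itself only shows AnonHugePages being read and anon=0 stored):

# Sketch of the guard traced at hugepages.sh@96/@97. The policy file
# contains e.g. "always [madvise] never"; the bracketed word is active.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled, so anonymous huge pages may inflate the counts.
    # AnonHugePages is reported in kB (0 kB in the dump above).
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    anon=$((anon_kb / 2048))
fi
echo "anon=$anon"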
00:03:18.315 20:03:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.315 20:03:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.315 20:03:55 -- setup/common.sh@18 -- # local node=
00:03:18.315 20:03:55 -- setup/common.sh@19 -- # local var val
00:03:18.315 20:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.315 20:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.315 20:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.315 20:03:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.315 20:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.315 20:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.315 20:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:18.315 20:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:18.315 20:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73430472 kB' 'MemAvailable: 78410152 kB' 'Buffers: 2696 kB' 'Cached: 14309772 kB' 'SwapCached: 0 kB' 'Active: 10187136 kB' 'Inactive: 4658688 kB' 'Active(anon): 9620912 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536772 kB' 'Mapped: 239520 kB' 'Shmem: 9087556 kB' 'KReclaimable: 534604 kB' 'Slab: 1072588 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 537984 kB' 'KernelStack: 19696 kB' 'PageTables: 9716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000504 kB' 'Committed_AS: 11027728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212932 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
00:03:18.315 20:03:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.315 20:03:55 -- setup/common.sh@32 -- # continue
[... the same "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / continue / IFS / read cycle repeats for every remaining field ...]
00:03:18.316 20:03:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.316 20:03:55 -- setup/common.sh@33 -- # echo 0
00:03:18.317 20:03:55 -- setup/common.sh@33 -- # return 0
00:03:18.317 20:03:55 -- setup/hugepages.sh@99 -- # surp=0
00:03:18.317 20:03:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:18.317 20:03:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:18.317 20:03:55 -- setup/common.sh@18 -- # local node=
00:03:18.317 20:03:55 -- setup/common.sh@19 -- # local var val
00:03:18.317 20:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.317 20:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.317 20:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.317 20:03:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.317 20:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.317 20:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:18.317 20:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73429804 kB' 'MemAvailable: 78409484 kB' 'Buffers: 2696 kB' 'Cached: 14309784 kB' 'SwapCached: 0 kB' 'Active: 10186060 kB' 'Inactive: 4658688 kB' 'Active(anon): 9619836 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535544 kB' 'Mapped: 239424 kB' 'Shmem: 9087568 kB' 'KReclaimable: 534604 kB' 'Slab: 1072548 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 537944 kB' 'KernelStack: 19552 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000504 kB' 'Committed_AS: 11026352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212996 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue
[... the "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / continue cycle continues over the remaining fields ...]
00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.317 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.317 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.318 20:03:55 -- setup/common.sh@33 -- # echo 0 00:03:18.318 20:03:55 -- setup/common.sh@33 -- # return 0 00:03:18.318 20:03:55 -- setup/hugepages.sh@100 -- # resv=0 00:03:18.318 20:03:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:18.318 nr_hugepages=1025 00:03:18.318 20:03:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.318 resv_hugepages=0 00:03:18.318 20:03:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.318 surplus_hugepages=0 00:03:18.318 20:03:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.318 anon_hugepages=0 00:03:18.318 20:03:55 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:18.318 20:03:55 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:18.318 20:03:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.318 20:03:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.318 20:03:55 -- setup/common.sh@18 -- # local node= 00:03:18.318 20:03:55 -- setup/common.sh@19 -- # local var val 00:03:18.318 20:03:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.318 20:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.318 20:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.318 20:03:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.318 20:03:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.318 20:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73428732 kB' 'MemAvailable: 78408412 kB' 'Buffers: 2696 kB' 'Cached: 14309796 kB' 'SwapCached: 0 kB' 'Active: 10186308 kB' 'Inactive: 4658688 kB' 'Active(anon): 9620084 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535768 kB' 'Mapped: 239424 kB' 'Shmem: 9087580 kB' 'KReclaimable: 534604 kB' 'Slab: 1072548 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 537944 kB' 'KernelStack: 19616 kB' 'PageTables: 9676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000504 kB' 'Committed_AS: 11027756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213028 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 
'DirectMap1G: 79691776 kB' 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.318 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.318 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- 
setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 
00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.319 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.319 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.320 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.320 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.320 20:03:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.320 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.320 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.320 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.320 20:03:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.320 20:03:55 -- setup/common.sh@32 -- # continue 00:03:18.320 20:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.320 20:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.320 20:03:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.320 20:03:55 -- setup/common.sh@33 -- # echo 1025 00:03:18.320 20:03:55 -- setup/common.sh@33 -- # return 0 00:03:18.320 20:03:55 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:18.320 20:03:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.320 20:03:55 -- setup/hugepages.sh@27 -- # local node 00:03:18.320 20:03:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.320 20:03:55 -- 
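The three lookups above all go through the same setup/common.sh get_meminfo helper: mapfile a meminfo file, strip any "Node N " prefix, then scan key/value pairs with IFS=': ' until the requested key matches. A minimal standalone sketch of that pattern follows; it is not the SPDK helper itself, and the function name and while-read form are illustrative:

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs when a node index is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix on node files
        local var val _
        while IFS=': ' read -r var val _; do
            # Print the value of the first matching key, e.g. 1025 for HugePages_Total.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On this box, get_meminfo_sketch HugePages_Total would print 1025, and get_meminfo_sketch HugePages_Total 0 would print the node0 count, 512.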
00:03:18.320 20:03:55 -- setup/hugepages.sh@112 -- # get_nodes
00:03:18.320 20:03:55 -- setup/hugepages.sh@27 -- # local node
00:03:18.320 20:03:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.320 20:03:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:18.320 20:03:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.320 20:03:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:18.320 20:03:55 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:18.320 20:03:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:18.320 20:03:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.320 20:03:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.320 20:03:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:18.320 20:03:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.320 20:03:55 -- setup/common.sh@18 -- # local node=0
00:03:18.320 20:03:55 -- setup/common.sh@19 -- # local var val
00:03:18.320 20:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.320 20:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.320 20:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:18.320 20:03:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:18.320 20:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.320 20:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.320 20:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:18.320 20:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:18.320 20:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 24607236 kB' 'MemUsed: 8027392 kB' 'SwapCached: 0 kB' 'Active: 4218720 kB' 'Inactive: 1226824 kB' 'Active(anon): 3764996 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226824 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5062408 kB' 'Mapped: 191792 kB' 'AnonPages: 386384 kB' 'Shmem: 3381860 kB' 'KernelStack: 11880 kB' 'PageTables: 5916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 246776 kB' 'Slab: 524100 kB' 'SReclaimable: 246776 kB' 'SUnreclaim: 277324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:18.320 [xtrace loop condensed: read/continue repeats for every node0 meminfo key until HugePages_Surp matches]
00:03:18.321 20:03:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.321 20:03:55 -- setup/common.sh@33 -- # echo 0
00:03:18.321 20:03:55 -- setup/common.sh@33 -- # return 0
00:03:18.321 20:03:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
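For reference, the per-node pool the trace just read out of node0's meminfo is also exposed under sysfs. A hedged aside, not part of the script under test, assuming the 2048 kB default page size shown above:

    for n in /sys/devices/system/node/node[0-9]*; do
        hp=$n/hugepages/hugepages-2048kB
        # nr_hugepages / free_hugepages / surplus_hugepages mirror the meminfo fields.
        echo "${n##*/}: total=$(cat "$hp/nr_hugepages") surplus=$(cat "$hp/surplus_hugepages")"
    done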
00:03:18.321 20:03:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.321 20:03:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.321 20:03:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:18.321 20:03:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.321 20:03:55 -- setup/common.sh@18 -- # local node=1
00:03:18.321 20:03:55 -- setup/common.sh@19 -- # local var val
00:03:18.321 20:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.321 20:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.321 20:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:18.321 20:03:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:18.321 20:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.321 20:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.321 20:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:18.321 20:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:18.321 20:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688372 kB' 'MemFree: 48821152 kB' 'MemUsed: 11867220 kB' 'SwapCached: 0 kB' 'Active: 5967276 kB' 'Inactive: 3431864 kB' 'Active(anon): 5854776 kB' 'Inactive(anon): 0 kB' 'Active(file): 112500 kB' 'Inactive(file): 3431864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9250100 kB' 'Mapped: 47632 kB' 'AnonPages: 149104 kB' 'Shmem: 5705736 kB' 'KernelStack: 7752 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 287828 kB' 'Slab: 548448 kB' 'SReclaimable: 287828 kB' 'SUnreclaim: 260620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:18.321 [xtrace loop condensed: read/continue repeats for every node1 meminfo key until HugePages_Surp matches]
00:03:18.582 20:03:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.582 20:03:55 -- setup/common.sh@33 -- # echo 0
00:03:18.582 20:03:55 -- setup/common.sh@33 -- # return 0
00:03:18.582 20:03:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.582 20:03:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.582 20:03:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.582 20:03:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.582 20:03:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:18.582 node0=512 expecting 513
00:03:18.582 20:03:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.582 20:03:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.582 20:03:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.582 20:03:55 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:18.582 node1=513 expecting 512
00:03:18.582 20:03:55 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:18.582
00:03:18.582 real	0m3.297s
00:03:18.582 user	0m1.394s
00:03:18.582 sys	0m1.947s
00:03:18.582 20:03:55 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:18.582 20:03:55 -- common/autotest_common.sh@10 -- # set +x
00:03:18.582 ************************************
00:03:18.582 END TEST odd_alloc
00:03:18.582 ************************************
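What odd_alloc just verified: an odd global pool of 1025 pages was spread across the two NUMA nodes as 512 on node0 and 513 on node1, with zero surplus and reserved pages. A small sketch of the same check, reading the node meminfo files directly (the awk filter is illustrative, not the test's own code):

    total=0
    for f in /sys/devices/system/node/node[0-9]*/meminfo; do
        # Node files carry a "Node N " prefix, so the count is the last field.
        pages=$(awk '/HugePages_Total/ {print $NF}' "$f")
        echo "$f: $pages huge pages"
        (( total += pages ))
    done
    (( total == 1025 )) && echo 'odd split adds up'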
00:03:18.583 20:03:55 -- common/autotest_common.sh@1102 -- # custom_alloc
00:03:18.583 20:03:55 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:18.583 20:03:55 -- setup/hugepages.sh@169 -- # local node
00:03:18.583 20:03:55 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:18.583 20:03:55 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:18.583 20:03:55 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:18.583 20:03:55 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:18.583 20:03:55 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:18.583 20:03:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:18.583 20:03:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:18.583 20:03:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:18.583 20:03:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[trace condensed: setup/hugepages.sh@62-84 — no user nodes given, so the 512 pages are split evenly across the 2 nodes: nodes_test[0]=256, nodes_test[1]=256]
00:03:18.583 20:03:55 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:18.583 20:03:55 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:18.583 20:03:55 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:18.583 20:03:55 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:18.583 20:03:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:18.583 20:03:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[trace condensed: setup/hugepages.sh@62-78 — nodes_hp already holds node0, so the even split is overridden: nodes_test[0]=512]
00:03:18.583 20:03:55 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
[trace condensed: setup/hugepages.sh@181-186 — the HUGENODE entries are collected from nodes_hp, _nr_hugepages accumulates to 1536, and a final get_test_nr_hugepages_per_node sets nodes_test[0]=512, nodes_test[1]=1024]
00:03:18.583 20:03:55 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:18.583 20:03:55 -- setup/hugepages.sh@187 -- # setup output
00:03:18.583 20:03:55 -- setup/common.sh@9 -- # [[ output == output ]]
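HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' asks setup.sh to reserve the pool per NUMA node rather than system-wide. What that ultimately boils down to is a write to the standard per-node sysfs knob; a minimal sketch (the helper name is illustrative; the sysfs path is the stock kernel interface for 2 MiB pages, not setup.sh's own code):

    # Reserve 2 MiB hugepages on a specific NUMA node (illustrative helper).
    reserve_node_hugepages() {
        local node=$1 count=$2
        echo "$count" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
    }
    reserve_node_hugepages 0 512    # node0: 512 pages
    reserve_node_hugepages 1 1024   # node1: 1024 pages -> 1536 total, matching this test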
00:03:18.583 20:03:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.122 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:21.382 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.382 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.382 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.644 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.644 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.644 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.644 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.644 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
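"Already using the vfio-pci driver" means these NVMe/IOAT functions were bound on an earlier run, so setup.sh leaves them in place. One quick way to check which driver a PCI function is currently bound to (an illustrative one-liner over the standard sysfs layout, not setup.sh's own code):

    # Print the driver currently bound to a PCI function, e.g. "vfio-pci".
    current_driver() {
        local bdf=$1
        basename "$(readlink "/sys/bus/pci/devices/${bdf}/driver")"
    }
    current_driver 0000:5e:00.0   # the NVMe device above -> vfio-pci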
00:03:21.644 20:03:58 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:21.644 20:03:58 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:21.644 20:03:58 -- setup/hugepages.sh@89 -- # local node
00:03:21.644 20:03:58 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.644 20:03:58 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.644 20:03:58 -- setup/hugepages.sh@92 -- # local surp
00:03:21.644 20:03:58 -- setup/hugepages.sh@93 -- # local resv
00:03:21.644 20:03:58 -- setup/hugepages.sh@94 -- # local anon
00:03:21.644 20:03:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:21.644 20:03:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.644 20:03:58 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.644 20:03:58 -- setup/common.sh@18 -- # local node=
00:03:21.644 20:03:58 -- setup/common.sh@19 -- # local var val
00:03:21.644 20:03:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.644 20:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.644 20:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.645 20:03:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.645 20:03:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.645 20:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.645 20:03:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 72369548 kB' 'MemAvailable: 77349228 kB' 'Buffers: 2696 kB' 'Cached: 14309892 kB' 'SwapCached: 0 kB' 'Active: 10186076 kB' 'Inactive: 4658688 kB' 'Active(anon): 9619852 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535624 kB' 'Mapped: 239564 kB' 'Shmem: 9087676 kB' 'KReclaimable: 534604 kB' 'Slab: 1072444 kB' 'SReclaimable: 534604 kB' 'SUnreclaim: 537840 kB' 'KernelStack: 19440 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477240 kB' 'Committed_AS: 11023408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212916 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[trace condensed: setup/common.sh@31-32 — get_meminfo skips every key (MemTotal … HardwareCorrupted) until AnonHugePages matches]
00:03:21.645 20:03:58 -- setup/common.sh@33 -- # echo 0
00:03:21.645 20:03:58 -- setup/common.sh@33 -- # return 0
00:03:21.645 20:03:58 -- setup/hugepages.sh@97 -- # anon=0
00:03:21.645 20:03:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.645 20:03:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
[trace condensed: setup/common.sh@18-29 — same prologue as above: node unset, mem_f=/proc/meminfo, mapfile -t mem]
00:03:21.645 20:03:58 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot #2 — identical to the dump above except MemFree: 72371092 kB, MemAvailable: 77350772 kB, Active: 10185264 kB, Active(anon): 9619040 kB, AnonPages: 534796 kB, Mapped: 239440 kB, Shmem: 9087680 kB, Slab: 1072524 kB, SUnreclaim: 537920 kB, KernelStack: 19360 kB, PageTables: 8972 kB, Committed_AS: 11023420 kB, VmallocUsed: 212820 kB]
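A quick sanity check on the snapshot values: the hugetlb pool size is simply HugePages_Total times Hugepagesize, which agrees with the Hugetlb field in every dump here:

    # 1536 pages x 2048 kB/page = 3145728 kB (3 GiB), matching 'Hugetlb: 3145728 kB'
    echo "$(( 1536 * 2048 )) kB"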
[trace condensed: setup/common.sh@31-32 — get_meminfo skips every key (MemTotal … HugePages_Free) until HugePages_Surp matches]
00:03:21.646 20:03:59 -- setup/common.sh@33 -- # echo 0
00:03:21.646 20:03:59 -- setup/common.sh@33 -- # return 0
00:03:21.646 20:03:59 -- setup/hugepages.sh@99 -- # surp=0
00:03:21.646 20:03:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.646 20:03:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[trace condensed: setup/common.sh@18-29 — same get_meminfo prologue]
00:03:21.646 20:03:59 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot #3 — as snapshot #1 except MemFree: 72375352 kB, MemAvailable: 77355032 kB, Active: 10185540 kB, Active(anon): 9619316 kB, AnonPages: 535008 kB, Mapped: 239428 kB, Shmem: 9087692 kB, Slab: 1072524 kB, SUnreclaim: 537920 kB, KernelStack: 19408 kB, PageTables: 9112 kB, Committed_AS: 11023436 kB, VmallocUsed: 212836 kB]
[trace condensed: setup/common.sh@31-32 — get_meminfo skips every key until HugePages_Rsvd matches]
00:03:21.647 20:03:59 -- setup/common.sh@33 -- # echo 0
00:03:21.647 20:03:59 -- setup/common.sh@33 -- # return 0
00:03:21.647 20:03:59 -- setup/hugepages.sh@100 -- # resv=0
00:03:21.647 20:03:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:21.647 nr_hugepages=1536
00:03:21.647 20:03:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:21.647 resv_hugepages=0
00:03:21.647 20:03:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:21.647 surplus_hugepages=0
00:03:21.647 20:03:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:21.647 anon_hugepages=0
00:03:21.647 20:03:59 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:21.647 20:03:59 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
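In plain shell, the accounting verify_nr_hugepages has just performed looks like the following (variable names mirror the trace; get_meminfo is the sketch from earlier — an illustration, not the script verbatim):

    nr_hugepages=1536                     # requested pool size for this test
    anon=$(get_meminfo AnonHugePages)     # 0 in the dumps above
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    # The pool must account for exactly the requested pages: nothing allocated
    # as surplus beyond it, and nothing sitting reserved.
    (( 1536 == nr_hugepages + surp + resv )) && (( 1536 == nr_hugepages )) \
        && echo "hugepage accounting consistent"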
00:03:21.647 20:03:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.647 20:03:59 -- setup/common.sh@17 -- # local get=HugePages_Total
[trace condensed: setup/common.sh@18-29 — same get_meminfo prologue]
00:03:21.647 20:03:59 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot #4 — as snapshot #3 except MemFree: 72378096 kB, MemAvailable: 77357776 kB, Committed_AS: 11023448 kB, VmallocUsed: 212852 kB]
[trace condensed: setup/common.sh@31-32 — get_meminfo scans toward HugePages_Total; the trace resumes mid-scan below]
00:03:21.648 20:03:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.648 20:03:59 -- setup/common.sh@32 -- # continue 00:03:21.648 20:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 20:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 20:03:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.648 20:03:59 -- setup/common.sh@32 -- # continue 00:03:21.648 20:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 20:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 20:03:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.648 20:03:59 -- setup/common.sh@32 -- # continue 00:03:21.648 20:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 20:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 20:03:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.648 20:03:59 -- setup/common.sh@32 -- # continue 00:03:21.648 20:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.648 20:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.648 20:03:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.909 20:03:59 -- setup/common.sh@32 -- # continue 00:03:21.909 20:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 20:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 20:03:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.909 20:03:59 -- setup/common.sh@33 -- # echo 1536 00:03:21.909 20:03:59 -- setup/common.sh@33 -- # return 0 00:03:21.909 20:03:59 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:21.909 20:03:59 -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.909 20:03:59 -- setup/hugepages.sh@27 -- # local node 00:03:21.909 20:03:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.909 20:03:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.909 20:03:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.909 20:03:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.909 20:03:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.909 20:03:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.909 20:03:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.909 20:03:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.909 20:03:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.909 20:03:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.909 20:03:59 -- setup/common.sh@18 -- # local node=0 00:03:21.909 20:03:59 -- setup/common.sh@19 -- # local var val 00:03:21.909 20:03:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.909 20:03:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.909 20:03:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.909 20:03:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.909 20:03:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.909 20:03:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.909 20:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 20:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 20:03:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 24595408 kB' 'MemUsed: 8039220 kB' 'SwapCached: 0 
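The walk above is setup/common.sh's get_meminfo doing a key lookup: mapfile slurps /proc/meminfo (or the per-node copy under /sys/devices/system/node), the "Node N " prefixes are stripped, and an IFS=': ' read loop compares every key until the requested one matches and its value is echoed. A compact, self-contained sketch of that mechanism, assuming the same file layout as the trace (the function name and early-return shape here are mine, not the shipped helper):

    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node meminfo prefixes every line with "Node <id> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # get_meminfo_sketch HugePages_Total 0   -> 512, matching the echo above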
[xtrace condensed: setup/common.sh@32 tests each node0 key (MemTotal through HugePages_Free) against HugePages_Surp and skips it via continue]
00:03:21.910 20:03:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.910 20:03:59 -- setup/common.sh@33 -- # echo 0
00:03:21.910 20:03:59 -- setup/common.sh@33 -- # return 0
00:03:21.910 20:03:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.910 20:03:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.910 20:03:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.910 20:03:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:21.910 20:03:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.910 20:03:59 -- setup/common.sh@18 -- # local node=1
00:03:21.910 20:03:59 -- setup/common.sh@19 -- # local var val
00:03:21.910 20:03:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.910 20:03:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.910 20:03:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:21.910 20:03:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:21.910 20:03:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.910 20:03:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.911 20:03:59 -- setup/common.sh@31 -- # IFS=': '
00:03:21.911 20:03:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688372 kB' 'MemFree: 47783412 kB' 'MemUsed: 12904960 kB' 'SwapCached: 0 kB' 'Active: 5967820 kB' 'Inactive: 3431864 kB' 'Active(anon): 5855320 kB' 'Inactive(anon): 0 kB' 'Active(file): 112500 kB' 'Inactive(file): 3431864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9250196 kB' 'Mapped: 47632 kB' 'AnonPages: 149716 kB' 'Shmem: 5705832 kB' 'KernelStack: 7560 kB' 'PageTables: 3240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 287828 kB' 'Slab: 548724 kB' 'SReclaimable: 287828 kB' 'SUnreclaim: 260896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:21.911 20:03:59 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@32 tests each node1 key (MemTotal through HugePages_Free) against HugePages_Surp and skips it via continue]
00:03:21.911 20:03:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.911 20:03:59 -- setup/common.sh@33 -- # echo 0
00:03:21.911 20:03:59 -- setup/common.sh@33 -- # return 0
00:03:21.911 20:03:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.912 20:03:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.912 20:03:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.912 20:03:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.912 20:03:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:21.912 node0=512 expecting 512
00:03:21.912 20:03:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.912 20:03:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.912 20:03:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.912 20:03:59 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:21.912 node1=1024 expecting 1024
00:03:21.912 20:03:59 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:21.912
00:03:21.912 real 0m3.331s
00:03:21.912 user 0m1.372s
00:03:21.912 sys 0m2.019s
00:03:21.912 20:03:59 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:21.912 20:03:59 -- common/autotest_common.sh@10 -- # set +x
00:03:21.912 ************************************
00:03:21.912 END TEST custom_alloc
00:03:21.912 ************************************
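What custom_alloc just verified is worth restating: the kernel's per-node HugePages_Total counts (512 on node 0, 1024 on node 1) had to line up with the per-node counts the test configured, and the global total of 1536 had to equal nr_hugepages plus surplus plus reserved pages. A sketch of that comparison, assuming sorted_t/sorted_s are plain indexed arrays keyed by page count (my reading of the hugepages.sh@126-@130 lines above, not the shipped script):

    nodes_sys=([0]=512 [1]=1024)    # read back from node*/meminfo above
    nodes_test=([0]=512 [1]=1024)   # what the test configured
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # indexed arrays: the count becomes the key
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # indexed-array keys enumerate in ascending order, so the joined key
    # lists compare the two distributions independent of node order
    [[ $(IFS=,; echo "${!sorted_s[*]}") == "$(IFS=,; echo "${!sorted_t[*]}")" ]] \
        && echo "per-node hugepage split verified: 512,1024"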
00:03:21.912 20:03:59 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:21.912 20:03:59 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:03:21.912 20:03:59 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:03:21.912 20:03:59 -- common/autotest_common.sh@10 -- # set +x
00:03:21.912 ************************************
00:03:21.912 START TEST no_shrink_alloc
00:03:21.912 ************************************
00:03:21.912 20:03:59 -- common/autotest_common.sh@1102 -- # no_shrink_alloc
00:03:21.912 20:03:59 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:21.912 20:03:59 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:21.912 20:03:59 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:21.912 20:03:59 -- setup/hugepages.sh@51 -- # shift
00:03:21.912 20:03:59 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:21.912 20:03:59 -- setup/hugepages.sh@52 -- # local node_ids
00:03:21.912 20:03:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:21.912 20:03:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:21.912 20:03:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:21.912 20:03:59 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:21.912 20:03:59 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:21.912 20:03:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:21.912 20:03:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:21.912 20:03:59 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:21.912 20:03:59 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:21.912 20:03:59 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:21.912 20:03:59 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:21.912 20:03:59 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:21.912 20:03:59 -- setup/hugepages.sh@73 -- # return 0
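The prologue above sizes the test: get_test_nr_hugepages turns the 2097152 kB request into nr_hugepages=1024 and pins it to the user-supplied node list ('0'). The trace never spells out the arithmetic, but with the 'Hugepagesize: 2048 kB' reported in meminfo it is a plain division; the sketch below makes that assumption explicit (function name hypothetical):

    default_hugepages=2048   # kB, from 'Hugepagesize: 2048 kB'
    get_test_nr_hugepages_sketch() {
        local size=$1; shift            # requested size in kB
        local node_ids=("$@") node
        local -A nodes_test=()
        (( size >= default_hugepages )) || return 1
        local nr_hugepages=$((size / default_hugepages))
        for node in "${node_ids[@]}"; do
            # each requested node gets the full count, as hugepages.sh@71 shows
            nodes_test[$node]=$nr_hugepages
        done
        echo "nr_hugepages=$nr_hugepages nodes=(${!nodes_test[*]})"
    }
    # get_test_nr_hugepages_sketch 2097152 0   -> nr_hugepages=1024 nodes=(0)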
00:03:21.912 20:03:59 -- setup/hugepages.sh@198 -- # setup output
00:03:21.912 20:03:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:21.912 20:03:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:24.451 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:24.710 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:24.710 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:24.710 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:24.974 20:04:02 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:24.974 20:04:02 -- setup/hugepages.sh@89 -- # local node
00:03:24.974 20:04:02 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:24.974 20:04:02 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:24.974 20:04:02 -- setup/hugepages.sh@92 -- # local surp
00:03:24.974 20:04:02 -- setup/hugepages.sh@93 -- # local resv
00:03:24.974 20:04:02 -- setup/hugepages.sh@94 -- # local anon
00:03:24.974 20:04:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:24.974 20:04:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:24.974 20:04:02 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:24.974 20:04:02 -- setup/common.sh@18 -- # local node=
00:03:24.974 20:04:02 -- setup/common.sh@19 -- # local var val
00:03:24.974 20:04:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.974 20:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.974 20:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.974 20:04:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.974 20:04:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.974 20:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.974 20:04:02 -- setup/common.sh@31 -- # IFS=': '
00:03:24.974 20:04:02 -- setup/common.sh@31 -- # read -r var val _
00:03:24.974 20:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73441516 kB' 'MemAvailable: 78421132 kB' 'Buffers: 2696 kB' 'Cached: 14310020 kB' 'SwapCached: 0 kB' 'Active: 10186180 kB' 'Inactive: 4658688 kB' 'Active(anon): 9619956 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534996 kB' 'Mapped: 239520 kB' 'Shmem: 9087804 kB' 'KReclaimable: 534540 kB' 'Slab: 1072436 kB' 'SReclaimable: 534540 kB' 'SUnreclaim: 537896 kB' 'KernelStack: 19456 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11024236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212900 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace condensed: setup/common.sh@32 tests each key (MemTotal through HardwareCorrupted) against AnonHugePages and skips it via continue]
00:03:24.975 20:04:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:24.975 20:04:02 -- setup/common.sh@33 -- # echo 0
00:03:24.975 20:04:02 -- setup/common.sh@33 -- # return 0
00:03:24.975 20:04:02 -- setup/hugepages.sh@97 -- # anon=0
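verify_nr_hugepages starts by deciding whether anonymous transparent huge pages could skew the counts: the hugepages.sh@96 test above pattern-matches the kernel's THP mode string, and only when the mode is not [never] does it read AnonHugePages (0 kB on this box). A standalone version of that probe, with awk standing in for the get_meminfo helper:

    # bracketed word = active mode, e.g. "always [madvise] never"
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP not disabled outright, so anonymous huge pages may appear
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "AnonHugePages: ${anon:-0} kB"   # 0 kB in the run above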
00:03:24.975 20:04:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:24.975 20:04:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.975 20:04:02 -- setup/common.sh@18 -- # local node=
00:03:24.975 20:04:02 -- setup/common.sh@19 -- # local var val
00:03:24.975 20:04:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.975 20:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.975 20:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.975 20:04:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.975 20:04:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.975 20:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.975 20:04:02 -- setup/common.sh@31 -- # IFS=': '
00:03:24.975 20:04:02 -- setup/common.sh@31 -- # read -r var val _
00:03:24.975 20:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73442020 kB' 'MemAvailable: 78421636 kB' 'Buffers: 2696 kB' 'Cached: 14310024 kB' 'SwapCached: 0 kB' 'Active: 10185868 kB' 'Inactive: 4658688 kB' 'Active(anon): 9619644 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534732 kB' 'Mapped: 239512 kB' 'Shmem: 9087808 kB' 'KReclaimable: 534540 kB' 'Slab: 1072432 kB' 'SReclaimable: 534540 kB' 'SUnreclaim: 537892 kB' 'KernelStack: 19440 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11024248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212852 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace condensed: setup/common.sh@32 again walks the meminfo keys (MemTotal onward), skipping each via continue on the way to the HugePages_Surp entry]
00:03:24.976 20:04:02 --
setup/common.sh@31 -- # IFS=': ' 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.976 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.976 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.977 20:04:02 -- setup/common.sh@33 -- # echo 0 00:03:24.977 20:04:02 -- setup/common.sh@33 -- # return 0 00:03:24.977 20:04:02 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.977 20:04:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.977 20:04:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.977 20:04:02 -- setup/common.sh@18 -- # local node= 00:03:24.977 20:04:02 -- setup/common.sh@19 -- # local var val 00:03:24.977 20:04:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.977 20:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.977 20:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.977 20:04:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.977 20:04:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.977 20:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.977 20:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73442052 kB' 'MemAvailable: 78421668 kB' 'Buffers: 2696 kB' 'Cached: 14310036 kB' 'SwapCached: 0 kB' 'Active: 10185388 kB' 'Inactive: 4658688 kB' 'Active(anon): 9619164 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 534688 kB' 'Mapped: 239432 kB' 'Shmem: 9087820 kB' 'KReclaimable: 534540 kB' 'Slab: 1072416 kB' 'SReclaimable: 534540 kB' 'SUnreclaim: 537876 kB' 'KernelStack: 19440 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11024264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212852 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.977 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.977 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- 
setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.978 20:04:02 -- setup/common.sh@33 -- # echo 0 00:03:24.978 20:04:02 -- setup/common.sh@33 -- # return 0 00:03:24.978 20:04:02 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.978 20:04:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.978 nr_hugepages=1024 00:03:24.978 20:04:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.978 resv_hugepages=0 00:03:24.978 20:04:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.978 surplus_hugepages=0 00:03:24.978 20:04:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.978 anon_hugepages=0 00:03:24.978 20:04:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.978 20:04:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.978 20:04:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.978 20:04:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.978 20:04:02 -- setup/common.sh@18 -- # local node= 00:03:24.978 20:04:02 -- setup/common.sh@19 -- # local var val 00:03:24.978 20:04:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.978 20:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.978 20:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.978 20:04:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.978 20:04:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.978 20:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.978 20:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73441804 kB' 'MemAvailable: 78421420 kB' 'Buffers: 2696 kB' 'Cached: 14310048 kB' 'SwapCached: 0 kB' 'Active: 10184980 kB' 'Inactive: 4658688 kB' 'Active(anon): 9618756 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534228 kB' 'Mapped: 239432 kB' 'Shmem: 9087832 kB' 'KReclaimable: 534540 kB' 'Slab: 1072416 kB' 'SReclaimable: 534540 kB' 'SUnreclaim: 537876 kB' 'KernelStack: 19424 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11024280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212868 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
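The two arithmetic checks above encode the hugepage accounting this suite relies on: the expected page count (nr_hugepages=1024) plus surplus and reserved pages must match what the kernel reports. A minimal stand-alone sketch of the same identity, assuming only standard /proc paths (check_hugepage_accounting is an invented name, not part of the SPDK scripts):

  # Recomputes the identity asserted at setup/hugepages.sh@107.
  # check_hugepage_accounting is illustrative, not the real helper.
  check_hugepage_accounting() {
      local total surp resv nr
      total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
      surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
      resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
      nr=$(cat /proc/sys/vm/nr_hugepages)
      # With the values in this run: 1024 == 1024 + 0 + 0, so the check passes.
      (( total == nr + surp + resv ))
  }

With surp=0 and resv=0 as parsed above, both (( ... )) tests succeed, and the trace goes on to cross-check HugePages_Total itself.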
00:03:24.978 20:04:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:24.978 20:04:02 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:24.978 20:04:02 -- setup/common.sh@18 -- # local node=
00:03:24.978 20:04:02 -- setup/common.sh@19 -- # local var val
00:03:24.978 20:04:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.978 20:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.978 20:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.978 20:04:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.978 20:04:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.978 20:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': '
00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _
00:03:24.978 20:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73441804 kB' 'MemAvailable: 78421420 kB' 'Buffers: 2696 kB' 'Cached: 14310048 kB' 'SwapCached: 0 kB' 'Active: 10184980 kB' 'Inactive: 4658688 kB' 'Active(anon): 9618756 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534228 kB' 'Mapped: 239432 kB' 'Shmem: 9087832 kB' 'KReclaimable: 534540 kB' 'Slab: 1072416 kB' 'SReclaimable: 534540 kB' 'SUnreclaim: 537876 kB' 'KernelStack: 19424 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11024280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212868 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
00:03:24.978 20:04:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.978 20:04:02 -- setup/common.sh@32 -- # continue
00:03:24.978 20:04:02 -- setup/common.sh@31 -- # IFS=': '
00:03:24.978 20:04:02 -- setup/common.sh@31 -- # read -r var val _
00:03:24.980 20:04:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.980 20:04:02 -- setup/common.sh@33 -- # echo 1024
00:03:24.980 20:04:02 -- setup/common.sh@33 -- # return 0
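Every get_meminfo call in this trace follows the same path through setup/common.sh: choose /proc/meminfo, or a node's own meminfo under sysfs when a node id is passed, strip the "Node N " prefix those per-node files carry, then scan "key: value" pairs until the requested key matches and echo its value. A simplified re-implementation of that pattern, for illustration only (the real helper uses mapfile plus an extglob prefix strip and differs in detail):

  # Sketch of the get_meminfo loop traced above; not the exact SPDK code.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      # Per-node statistics live under sysfs when a node id is supplied.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node [0-9] }   # per-node lines start "Node N ..." (single-digit ids in this sketch)
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done <"$mem_f"
      return 1
  }

Called as get_meminfo HugePages_Surp it answers 0 from /proc/meminfo; called as get_meminfo HugePages_Surp 0 it reads node0's sysfs copy instead, which are exactly the two shapes this log exercises.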
20:04:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.980 20:04:02 -- setup/hugepages.sh@112 -- # get_nodes
00:03:24.980 20:04:02 -- setup/hugepages.sh@27 -- # local node
00:03:24.980 20:04:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.980 20:04:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:24.980 20:04:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.980 20:04:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:24.980 20:04:02 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.980 20:04:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:24.980 20:04:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.980 20:04:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.980 20:04:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.980 20:04:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.980 20:04:02 -- setup/common.sh@18 -- # local node=0
00:03:24.980 20:04:02 -- setup/common.sh@19 -- # local var val
00:03:24.980 20:04:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.980 20:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.980 20:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.980 20:04:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.980 20:04:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.980 20:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.980 20:04:02 -- setup/common.sh@31 -- # IFS=': '
00:03:24.980 20:04:02 -- setup/common.sh@31 -- # read -r var val _
00:03:24.980 20:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 23576420 kB' 'MemUsed: 9058208 kB' 'SwapCached: 0 kB' 'Active: 4218276 kB' 'Inactive: 1226824 kB' 'Active(anon): 3764552 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226824 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5062452 kB' 'Mapped: 191800 kB' 'AnonPages: 385852 kB' 'Shmem: 3381904 kB' 'KernelStack: 11896 kB' 'PageTables: 6020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 246712 kB' 'Slab: 523700 kB' 'SReclaimable: 246712 kB' 'SUnreclaim: 276988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:24.980 20:04:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.980 20:04:02 -- setup/common.sh@32 -- # continue
00:03:24.980 20:04:02 -- setup/common.sh@31 -- # IFS=': '
00:03:24.980 20:04:02 -- setup/common.sh@31 -- # read -r var val _
00:03:24.981 20:04:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.981 20:04:02 -- setup/common.sh@33 -- # echo 0
00:03:24.981 20:04:02 -- setup/common.sh@33 -- # return 0
00:03:24.981 20:04:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.981 20:04:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.981 20:04:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.981 20:04:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.981 20:04:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:24.981 node0=1024 expecting 1024
00:03:24.981 20:04:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
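get_nodes above discovers two NUMA nodes and records the sysfs view in nodes_sys[] (1024 hugepages on node0, 0 on node1); the "node0=1024 expecting 1024" line is that view agreeing with the test's expectation. A small sketch of the same per-node tally, assuming the usual sysfs layout (print_node_hugepages is an invented name, not part of setup/hugepages.sh):

  # Mirrors the nodes_sys[] loop in the trace; illustrative only.
  print_node_hugepages() {
      local node_dir node total
      for node_dir in /sys/devices/system/node/node[0-9]*; do
          node=${node_dir##*node}
          total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
          echo "node$node=$total"
      done
  }

On this machine it would print node0=1024 and node1=0, matching the nodes_sys values captured above.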
00:03:24.981 20:04:02 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:24.981 20:04:02 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:24.981 20:04:02 -- setup/hugepages.sh@202 -- # setup output
00:03:24.981 20:04:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.981 20:04:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:28.280 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:28.280 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:28.280 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:28.280 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:28.280 INFO: Requested 512 hugepages but 1024 already allocated on node0
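The INFO line above is setup.sh declining to shrink an existing reservation: NRHUGE=512 asks for 512 pages, but node0 already holds 1024, so nothing is written back. A hedged sketch of such a guard (ensure_hugepages and its exact message are illustrative; the real logic in scripts/setup.sh is more involved):

  # "Allocate only if needed" guard; illustrative, not the real setup.sh code.
  # Writing the sysfs knob requires root.
  ensure_hugepages() {
      local want=$1 node=${2:-0}
      local knob=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      local have
      have=$(<"$knob")
      if (( have >= want )); then
          echo "INFO: Requested $want hugepages but $have already allocated on node$node"
          return 0
      fi
      echo "$want" >"$knob"
  }

Because CLEAR_HUGE=no is set, the existing 1024-page pool stays in place, and verify_nr_hugepages re-reads it below.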
00:03:28.280 20:04:05 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:28.280 20:04:05 -- setup/hugepages.sh@89 -- # local node
00:03:28.280 20:04:05 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:28.280 20:04:05 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:28.280 20:04:05 -- setup/hugepages.sh@92 -- # local surp
00:03:28.280 20:04:05 -- setup/hugepages.sh@93 -- # local resv
00:03:28.280 20:04:05 -- setup/hugepages.sh@94 -- # local anon
00:03:28.280 20:04:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:28.280 20:04:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:28.280 20:04:05 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:28.280 20:04:05 -- setup/common.sh@18 -- # local node=
00:03:28.280 20:04:05 -- setup/common.sh@19 -- # local var val
00:03:28.280 20:04:05 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.280 20:04:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.280 20:04:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.280 20:04:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.280 20:04:05 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.280 20:04:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.280 20:04:05 -- setup/common.sh@31 -- # IFS=': '
00:03:28.280 20:04:05 -- setup/common.sh@31 -- # read -r var val _
00:03:28.280 20:04:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73427620 kB' 'MemAvailable: 78407236 kB' 'Buffers: 2696 kB' 'Cached: 14310132 kB' 'SwapCached: 0 kB' 'Active: 10187776 kB' 'Inactive: 4658688 kB' 'Active(anon): 9621552 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536480 kB' 'Mapped: 239532 kB' 'Shmem: 9087916 kB' 'KReclaimable: 534540 kB' 'Slab: 1071880 kB' 'SReclaimable: 534540 kB' 'SUnreclaim: 537340 kB' 'KernelStack: 19440 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11025164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212948 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace condensed: the read loop tests each key above in order, hitting '-- # continue' on every miss, until AnonHugePages matches]
00:03:28.281 20:04:05 -- setup/common.sh@33 -- # echo 0
00:03:28.281 20:04:05 -- setup/common.sh@33 -- # return 0
00:03:28.281 20:04:05 -- setup/hugepages.sh@97 -- # anon=0
00:03:28.281 20:04:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:28.281 20:04:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.281 20:04:05 -- setup/common.sh@18 -- # local node=
00:03:28.281 20:04:05 -- setup/common.sh@19 -- # local var val
00:03:28.281 20:04:05 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.281 20:04:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.281 20:04:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.281 20:04:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.281 20:04:05 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.281 20:04:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.281 20:04:05 -- setup/common.sh@31 -- # IFS=': '
00:03:28.281 20:04:05 -- setup/common.sh@31 -- # read -r var val _
00:03:28.281 20:04:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73429968 kB' 'MemAvailable: 78409584 kB' 'Buffers: 2696 kB' 'Cached: 14310136 kB' 'SwapCached: 0 kB' 'Active: 10186604 kB' 'Inactive: 4658688 kB' 'Active(anon): 9620380 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535788 kB' 'Mapped: 239440 kB' 'Shmem: 9087920 kB' 'KReclaimable: 534540 kB' 'Slab: 1071852 kB' 'SReclaimable: 534540 kB' 'SUnreclaim: 537312 kB' 'KernelStack: 19408 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11025176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212900 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace condensed: the same per-key loop runs again, this time until HugePages_Surp matches]
00:03:28.282 20:04:05 -- setup/common.sh@33 -- # echo 0
00:03:28.282 20:04:05 -- setup/common.sh@33 -- # return 0
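The backslash runs such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the comparisons above are not mojibake: when the right-hand side of a [[ ... == ... ]] test is a quoted expansion, bash's xtrace re-escapes every character so the operand reads as a literal string match rather than a glob. A minimal reproduction (values here are arbitrary examples):

set -x
var=MemTotal get=HugePages_Surp
[[ $var == "$get" ]]   # printed as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]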
00:03:28.282 20:04:05 -- setup/hugepages.sh@99 -- # surp=0
00:03:28.282 20:04:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.282 20:04:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.282 20:04:05 -- setup/common.sh@18 -- # local node=
00:03:28.282 20:04:05 -- setup/common.sh@19 -- # local var val
00:03:28.282 20:04:05 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.282 20:04:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.282 20:04:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.282 20:04:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.282 20:04:05 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.282 20:04:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.282 20:04:05 -- setup/common.sh@31 -- # IFS=': '
00:03:28.282 20:04:05 -- setup/common.sh@31 -- # read -r var val _
00:03:28.283 20:04:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73430288 kB' 'MemAvailable: 78409904 kB' 'Buffers: 2696 kB' 'Cached: 14310148 kB' 'SwapCached: 0 kB' 'Active: 10186608 kB' 'Inactive: 4658688 kB' 'Active(anon): 9620384 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535792 kB' 'Mapped: 239440 kB' 'Shmem: 9087932 kB' 'KReclaimable: 534540 kB' 'Slab: 1071852 kB' 'SReclaimable: 534540 kB' 'SUnreclaim: 537312 kB' 'KernelStack: 19408 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11025192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212900 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace condensed: per-key loop over the dump above until HugePages_Rsvd matches]
00:03:28.284 20:04:05 -- setup/common.sh@33 -- # echo 0
00:03:28.284 20:04:05 -- setup/common.sh@33 -- # return 0
00:03:28.284 20:04:05 -- setup/hugepages.sh@100 -- # resv=0
00:03:28.284 20:04:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:28.284 nr_hugepages=1024
00:03:28.284 20:04:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:28.284 resv_hugepages=0
00:03:28.284 20:04:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:28.284 surplus_hugepages=0
00:03:28.284 20:04:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:28.284 anon_hugepages=0
00:03:28.284 20:04:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:28.284 20:04:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
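With anon, surp and resv all zero, the two checks just traced (hugepages.sh@107 and @109) reduce to requiring that the kernel's configured hugepage total equal the requested count. Spelled out with this run's values, using the get_meminfo sketch from above (the final echo is illustrative, not script output):

nr_hugepages=1024 surp=0 resv=0
total=$(get_meminfo HugePages_Total)                # 1024 in this run
(( total == nr_hugepages + surp + resv ))           # the @107-style check
(( total == nr_hugepages )) && echo 'nr_hugepages verified'   # the @109-style check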
00:03:28.284 20:04:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:28.284 20:04:05 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:28.284 20:04:05 -- setup/common.sh@18 -- # local node=
00:03:28.284 20:04:05 -- setup/common.sh@19 -- # local var val
00:03:28.284 20:04:05 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.284 20:04:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.284 20:04:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.284 20:04:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.284 20:04:05 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.284 20:04:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.284 20:04:05 -- setup/common.sh@31 -- # IFS=': '
00:03:28.284 20:04:05 -- setup/common.sh@31 -- # read -r var val _
00:03:28.284 20:04:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73431012 kB' 'MemAvailable: 78410628 kB' 'Buffers: 2696 kB' 'Cached: 14310172 kB' 'SwapCached: 0 kB' 'Active: 10186264 kB' 'Inactive: 4658688 kB' 'Active(anon): 9620040 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535380 kB' 'Mapped: 239440 kB' 'Shmem: 9087956 kB' 'KReclaimable: 534540 kB' 'Slab: 1071852 kB' 'SReclaimable: 534540 kB' 'SUnreclaim: 537312 kB' 'KernelStack: 19392 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 11025208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212884 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2888660 kB' 'DirectMap2M: 19859456 kB' 'DirectMap1G: 79691776 kB'
[xtrace condensed: per-key loop until HugePages_Total matches; timestamps advance from 00:03:28.284 to 00:03:28.546 during the walk]
00:03:28.547 20:04:05 -- setup/common.sh@33 -- # echo 1024
00:03:28.547 20:04:05 -- setup/common.sh@33 -- # return 0
00:03:28.547 20:04:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:28.547 20:04:05 -- setup/hugepages.sh@112 -- # get_nodes
00:03:28.547 20:04:05 -- setup/hugepages.sh@27 -- # local node
00:03:28.547 20:04:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.547 20:04:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:28.547 20:04:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.547 20:04:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:28.547 20:04:05 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:28.547 20:04:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
'WritebackTmp: 0 kB' 'KReclaimable: 246712 kB' 'Slab: 523312 kB' 'SReclaimable: 246712 kB' 'SUnreclaim: 276600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # 
continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.547 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.547 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # continue 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.548 20:04:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.548 20:04:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.548 20:04:05 -- setup/common.sh@33 -- # echo 0 00:03:28.548 20:04:05 -- setup/common.sh@33 -- # 
return 0 00:03:28.548 20:04:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.548 20:04:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.548 20:04:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.548 20:04:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.548 20:04:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:28.548 node0=1024 expecting 1024 00:03:28.548 20:04:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:28.548 00:03:28.548 real 0m6.586s 00:03:28.548 user 0m2.620s 00:03:28.548 sys 0m4.065s 00:03:28.548 20:04:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:28.548 20:04:05 -- common/autotest_common.sh@10 -- # set +x 00:03:28.548 ************************************ 00:03:28.548 END TEST no_shrink_alloc 00:03:28.548 ************************************ 00:03:28.548 20:04:05 -- setup/hugepages.sh@217 -- # clear_hp 00:03:28.548 20:04:05 -- setup/hugepages.sh@37 -- # local node hp 00:03:28.548 20:04:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:28.548 20:04:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.548 20:04:05 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.548 20:04:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.548 20:04:05 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.548 20:04:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:28.548 20:04:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.548 20:04:05 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.548 20:04:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.548 20:04:05 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.548 20:04:05 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:28.548 20:04:05 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:28.548 00:03:28.548 real 0m24.123s 00:03:28.548 user 0m9.348s 00:03:28.548 sys 0m14.197s 00:03:28.548 20:04:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:28.548 20:04:05 -- common/autotest_common.sh@10 -- # set +x 00:03:28.548 ************************************ 00:03:28.548 END TEST hugepages 00:03:28.548 ************************************ 00:03:28.548 20:04:05 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:28.548 20:04:05 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:28.548 20:04:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:28.548 20:04:05 -- common/autotest_common.sh@10 -- # set +x 00:03:28.548 ************************************ 00:03:28.548 START TEST driver 00:03:28.548 ************************************ 00:03:28.548 20:04:05 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:28.548 * Looking for test storage... 
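The clear_hp teardown just above resets what the hugepages suite allocated by writing 0 back into each per-node pool; the bare "echo 0" entries are those writes (xtrace does not show the redirection target, but the standard sysfs knob is nr_hugepages). A hedged sketch of the same cleanup, assuming the usual kernel sysfs layout and root privileges:

    # Sketch: zero every per-node hugepage reservation, as clear_hp does above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -e $hp/nr_hugepages ]] && echo 0 > "$hp/nr_hugepages"  # root only
        done
    done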
00:03:28.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:28.548 20:04:05 -- setup/driver.sh@68 -- # setup reset 00:03:28.548 20:04:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.548 20:04:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.823 20:04:10 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:33.823 20:04:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:33.823 20:04:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:33.823 20:04:10 -- common/autotest_common.sh@10 -- # set +x 00:03:33.823 ************************************ 00:03:33.823 START TEST guess_driver 00:03:33.823 ************************************ 00:03:33.823 20:04:10 -- common/autotest_common.sh@1102 -- # guess_driver 00:03:33.823 20:04:10 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:33.823 20:04:10 -- setup/driver.sh@47 -- # local fail=0 00:03:33.823 20:04:10 -- setup/driver.sh@49 -- # pick_driver 00:03:33.823 20:04:10 -- setup/driver.sh@36 -- # vfio 00:03:33.823 20:04:10 -- setup/driver.sh@21 -- # local iommu_grups 00:03:33.823 20:04:10 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:33.823 20:04:10 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:33.823 20:04:10 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:33.823 20:04:10 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:33.823 20:04:10 -- setup/driver.sh@29 -- # (( 220 > 0 )) 00:03:33.823 20:04:10 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:33.823 20:04:10 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:33.823 20:04:10 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:33.823 20:04:10 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:33.823 20:04:10 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:33.823 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.823 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:33.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:33.824 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:33.824 20:04:10 -- setup/driver.sh@30 -- # return 0 00:03:33.824 20:04:10 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:33.824 20:04:10 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:33.824 20:04:10 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:33.824 20:04:10 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:33.824 Looking for driver=vfio-pci 00:03:33.824 20:04:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.824 20:04:10 -- setup/driver.sh@45 -- # setup output config 00:03:33.824 20:04:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.824 20:04:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ denied == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # continue 
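pick_driver settles on vfio-pci above because unsafe no-IOMMU mode is off (N) but the host exposes IOMMU groups (the (( 220 > 0 )) check), and modprobe --show-depends vfio_pci proves the module chain resolves. A reduced sketch of that decision, illustrative rather than the exact test/setup/driver.sh logic:

    # Sketch: prefer vfio-pci when an IOMMU is usable, as decided above.
    pick_vfio() {
        local unsafe=N
        local groups=(/sys/kernel/iommu_groups/*)   # unguarded glob, as in the trace
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
            modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }

The marker scan that begins here then re-reads the `setup output config` listing line by line and counts a failure whenever a device row reports a driver other than the one picked.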
00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.361 20:04:13 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:03:36.361 20:04:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.361 20:04:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.301 20:04:14 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.301 20:04:14 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.301 20:04:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.301 20:04:14 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:37.301 20:04:14 -- setup/driver.sh@65 -- # setup reset 00:03:37.301 20:04:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.301 20:04:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.574 00:03:42.574 real 0m8.555s 00:03:42.575 user 0m2.637s 00:03:42.575 sys 0m4.467s 00:03:42.575 20:04:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.575 20:04:18 -- common/autotest_common.sh@10 -- # set +x 00:03:42.575 ************************************ 00:03:42.575 END TEST guess_driver 00:03:42.575 ************************************ 00:03:42.575 00:03:42.575 real 0m13.160s 00:03:42.575 user 0m4.035s 00:03:42.575 sys 0m6.929s 00:03:42.575 20:04:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.575 20:04:18 -- common/autotest_common.sh@10 -- # set +x 00:03:42.575 ************************************ 00:03:42.575 END TEST driver 00:03:42.575 ************************************ 00:03:42.575 20:04:19 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:42.575 20:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.575 20:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.575 20:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.575 ************************************ 00:03:42.575 START TEST devices 00:03:42.575 ************************************ 00:03:42.575 20:04:19 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:42.575 * Looking for test storage... 
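The devices suite opening here begins by fencing off zoned namespaces: is_block_zoned reads /sys/block/<dev>/queue/zoned, and any value other than "none" (nvme1n2 reports host-managed below) disqualifies the device; the real helper records the controller's PCI address (0000:5f:00.0) so later loops can skip it. A sketch of that filter, mirroring the checks traced below:

    # Sketch: collect block devices whose queue/zoned attribute is not "none".
    declare -A zoned=()
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        [[ $(< "$dev/queue/zoned") != none ]] && zoned["${dev##*/}"]=1
    done
    echo "zoned devices: ${!zoned[*]}"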
00:03:42.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.575 20:04:19 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:42.575 20:04:19 -- setup/devices.sh@192 -- # setup reset 00:03:42.575 20:04:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.575 20:04:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.136 20:04:22 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:45.136 20:04:22 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:03:45.136 20:04:22 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:03:45.136 20:04:22 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:03:45.136 20:04:22 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:45.136 20:04:22 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:03:45.136 20:04:22 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:03:45.136 20:04:22 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:45.136 20:04:22 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:45.136 20:04:22 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:45.136 20:04:22 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:03:45.136 20:04:22 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:03:45.136 20:04:22 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:45.136 20:04:22 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:45.136 20:04:22 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:45.136 20:04:22 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:03:45.136 20:04:22 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:03:45.136 20:04:22 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:45.136 20:04:22 -- common/autotest_common.sh@1648 -- # [[ host-managed != none ]] 00:03:45.136 20:04:22 -- common/autotest_common.sh@1657 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:03:45.136 20:04:22 -- setup/devices.sh@196 -- # blocks=() 00:03:45.136 20:04:22 -- setup/devices.sh@196 -- # declare -a blocks 00:03:45.136 20:04:22 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:45.136 20:04:22 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:45.136 20:04:22 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:45.136 20:04:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:45.136 20:04:22 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:45.136 20:04:22 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:45.136 20:04:22 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:45.136 20:04:22 -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:45.136 20:04:22 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:45.136 20:04:22 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:45.136 20:04:22 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:45.136 No valid GPT data, bailing 00:03:45.136 20:04:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:45.136 20:04:22 -- scripts/common.sh@393 -- # pt= 00:03:45.136 20:04:22 -- scripts/common.sh@394 -- # return 1 00:03:45.136 20:04:22 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:45.136 20:04:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:45.136 
20:04:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:45.136 20:04:22 -- setup/common.sh@80 -- # echo 1000204886016 00:03:45.136 20:04:22 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:45.136 20:04:22 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:45.136 20:04:22 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:45.136 20:04:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:45.136 20:04:22 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:45.136 20:04:22 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:45.136 20:04:22 -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:45.136 20:04:22 -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:45.136 20:04:22 -- setup/devices.sh@203 -- # continue 00:03:45.136 20:04:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:45.136 20:04:22 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:45.136 20:04:22 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:45.136 20:04:22 -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:45.136 20:04:22 -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:45.136 20:04:22 -- setup/devices.sh@203 -- # continue 00:03:45.136 20:04:22 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:45.136 20:04:22 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:45.136 20:04:22 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:45.136 20:04:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:45.136 20:04:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:45.136 20:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:45.136 ************************************ 00:03:45.136 START TEST nvme_mount 00:03:45.136 ************************************ 00:03:45.136 20:04:22 -- common/autotest_common.sh@1102 -- # nvme_mount 00:03:45.136 20:04:22 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:45.136 20:04:22 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:45.136 20:04:22 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.136 20:04:22 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.136 20:04:22 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:45.136 20:04:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:45.136 20:04:22 -- setup/common.sh@40 -- # local part_no=1 00:03:45.136 20:04:22 -- setup/common.sh@41 -- # local size=1073741824 00:03:45.136 20:04:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:45.136 20:04:22 -- setup/common.sh@44 -- # parts=() 00:03:45.136 20:04:22 -- setup/common.sh@44 -- # local parts 00:03:45.136 20:04:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:45.136 20:04:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.136 20:04:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:45.136 20:04:22 -- setup/common.sh@46 -- # (( part++ )) 00:03:45.137 20:04:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.137 20:04:22 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:45.137 20:04:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:45.137 20:04:22 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:46.076 Creating new GPT entries in memory. 
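The partition bounds handed to sgdisk below fall out of plain sector arithmetic: size starts at 1073741824 bytes (1 GiB), (( size /= 512 )) converts it to 2097152 sectors of 512 bytes, the first partition begins at sector 2048, and part_end = part_start + size - 1 = 2099199, which is exactly the --new=1:2048:2099199 that follows; the dm_mount test later repeats the step for a second partition at 2099200:4196351. The same computation, worked through:

    # Worked example of the sgdisk bounds used below.
    size=1073741824                          # 1 GiB in bytes
    (( size /= 512 ))                        # 2097152 sectors of 512 B
    part_start=2048
    (( part_end = part_start + size - 1 ))   # 2099199
    echo "sgdisk --new=1:${part_start}:${part_end}"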
00:03:46.076 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:46.076 other utilities. 00:03:46.076 20:04:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:46.076 20:04:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.076 20:04:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:46.076 20:04:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.076 20:04:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:47.456 Creating new GPT entries in memory. 00:03:47.456 The operation has completed successfully. 00:03:47.456 20:04:24 -- setup/common.sh@57 -- # (( part++ )) 00:03:47.456 20:04:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.456 20:04:24 -- setup/common.sh@62 -- # wait 1573927 00:03:47.456 20:04:24 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.456 20:04:24 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:47.456 20:04:24 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.456 20:04:24 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:47.456 20:04:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:47.456 20:04:24 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.456 20:04:24 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.456 20:04:24 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:47.456 20:04:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:47.456 20:04:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.456 20:04:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.456 20:04:24 -- setup/devices.sh@53 -- # local found=0 00:03:47.456 20:04:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:47.456 20:04:24 -- setup/devices.sh@56 -- # : 00:03:47.456 20:04:24 -- setup/devices.sh@59 -- # local pci status 00:03:47.456 20:04:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.456 20:04:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:47.456 20:04:24 -- setup/devices.sh@47 -- # setup output config 00:03:47.456 20:04:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.456 20:04:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.993 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:49.993 20:04:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:49.993 20:04:27 -- setup/devices.sh@63 -- # found=1 00:03:49.993 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.993 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:49.993 20:04:27 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.253 20:04:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.253 20:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.513 20:04:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.513 20:04:27 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:50.513 20:04:27 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.513 20:04:27 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.513 20:04:27 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.513 20:04:27 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:50.513 20:04:27 -- setup/devices.sh@20 -- # mountpoint -q 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.513 20:04:27 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.513 20:04:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.513 20:04:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:50.513 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:50.513 20:04:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.513 20:04:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.772 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:50.772 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:50.772 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:50.772 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:50.772 20:04:28 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:50.772 20:04:28 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:50.772 20:04:28 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.772 20:04:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:50.772 20:04:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:50.772 20:04:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.772 20:04:28 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.772 20:04:28 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:50.772 20:04:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:50.772 20:04:28 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.772 20:04:28 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.772 20:04:28 -- setup/devices.sh@53 -- # local found=0 00:03:50.772 20:04:28 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.772 20:04:28 -- setup/devices.sh@56 -- # : 00:03:50.772 20:04:28 -- setup/devices.sh@59 -- # local pci status 00:03:50.772 20:04:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.772 20:04:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:50.772 20:04:28 -- setup/devices.sh@47 -- # setup output config 00:03:50.772 20:04:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.772 20:04:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.065 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.065 20:04:31 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:54.065 20:04:31 -- setup/devices.sh@63 -- # found=1 00:03:54.065 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.065 20:04:31 -- setup/devices.sh@62 -- 
# [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.065 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.065 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.065 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.065 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.065 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.066 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.066 20:04:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.066 20:04:31 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:54.066 20:04:31 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.326 20:04:31 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.326 20:04:31 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.326 20:04:31 -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.326 20:04:31 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:54.326 20:04:31 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:54.326 20:04:31 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:54.326 20:04:31 -- setup/devices.sh@50 -- # local mount_point= 00:03:54.326 20:04:31 -- setup/devices.sh@51 -- # local test_file= 00:03:54.326 20:04:31 -- setup/devices.sh@53 -- # local found=0 00:03:54.326 20:04:31 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.326 20:04:31 -- setup/devices.sh@59 -- # local pci status 00:03:54.326 20:04:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.326 20:04:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:54.326 20:04:31 -- setup/devices.sh@47 -- # setup output config 00:03:54.326 20:04:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.326 20:04:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.863 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.863 20:04:34 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:56.863 20:04:34 -- setup/devices.sh@63 -- # found=1 00:03:56.863 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.863 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 
0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.123 20:04:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.123 20:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.383 20:04:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.383 20:04:34 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:57.383 20:04:34 -- setup/devices.sh@68 -- # return 0 00:03:57.383 20:04:34 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:57.383 20:04:34 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.383 20:04:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:57.383 20:04:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:57.383 20:04:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:57.383 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:57.383 00:03:57.383 real 0m12.246s 00:03:57.383 user 0m3.783s 00:03:57.383 sys 0m6.306s 00:03:57.383 20:04:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:57.383 20:04:34 -- common/autotest_common.sh@10 -- # set +x 00:03:57.383 ************************************ 00:03:57.383 END TEST nvme_mount 00:03:57.383 ************************************ 00:03:57.383 20:04:34 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:57.383 20:04:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:57.383 20:04:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:57.383 20:04:34 -- common/autotest_common.sh@10 -- # set +x 00:03:57.383 ************************************ 00:03:57.383 START TEST dm_mount 00:03:57.383 ************************************ 00:03:57.383 20:04:34 -- common/autotest_common.sh@1102 -- # dm_mount 00:03:57.383 20:04:34 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:57.383 20:04:34 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:57.383 20:04:34 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:57.383 20:04:34 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:57.383 20:04:34 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:57.383 20:04:34 -- setup/common.sh@40 -- # local part_no=2 00:03:57.383 20:04:34 -- setup/common.sh@41 -- # local size=1073741824 00:03:57.383 20:04:34 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:57.383 20:04:34 -- setup/common.sh@44 -- # parts=() 00:03:57.383 20:04:34 -- setup/common.sh@44 -- # local parts 00:03:57.383 20:04:34 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:57.383 20:04:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.383 20:04:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.383 20:04:34 -- setup/common.sh@46 -- # (( part++ )) 00:03:57.383 20:04:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.383 20:04:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.383 20:04:34 -- setup/common.sh@46 -- # (( part++ )) 00:03:57.383 20:04:34 -- setup/common.sh@46 
-- # (( part <= part_no )) 00:03:57.383 20:04:34 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:57.383 20:04:34 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:57.383 20:04:34 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:58.765 Creating new GPT entries in memory. 00:03:58.765 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:58.765 other utilities. 00:03:58.765 20:04:35 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:58.765 20:04:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.765 20:04:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.765 20:04:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.765 20:04:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:59.705 Creating new GPT entries in memory. 00:03:59.705 The operation has completed successfully. 00:03:59.705 20:04:36 -- setup/common.sh@57 -- # (( part++ )) 00:03:59.705 20:04:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.705 20:04:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:59.705 20:04:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.705 20:04:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:00.644 The operation has completed successfully. 00:04:00.644 20:04:37 -- setup/common.sh@57 -- # (( part++ )) 00:04:00.644 20:04:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.644 20:04:37 -- setup/common.sh@62 -- # wait 1578769 00:04:00.644 20:04:37 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:00.644 20:04:37 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.644 20:04:37 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.644 20:04:37 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:00.644 20:04:37 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:00.644 20:04:37 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.644 20:04:37 -- setup/devices.sh@161 -- # break 00:04:00.644 20:04:37 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.644 20:04:37 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:00.644 20:04:37 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:00.644 20:04:37 -- setup/devices.sh@166 -- # dm=dm-0 00:04:00.645 20:04:37 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:00.645 20:04:37 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:00.645 20:04:37 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.645 20:04:37 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:00.645 20:04:37 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.645 20:04:37 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.645 20:04:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:00.645 20:04:37 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.645 20:04:37 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.645 20:04:37 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:00.645 20:04:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:00.645 20:04:37 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.645 20:04:37 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.645 20:04:37 -- setup/devices.sh@53 -- # local found=0 00:04:00.645 20:04:37 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:00.645 20:04:37 -- setup/devices.sh@56 -- # : 00:04:00.645 20:04:37 -- setup/devices.sh@59 -- # local pci status 00:04:00.645 20:04:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.645 20:04:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:00.645 20:04:37 -- setup/devices.sh@47 -- # setup output config 00:04:00.645 20:04:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.645 20:04:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:03.941 20:04:40 -- setup/devices.sh@63 -- # found=1 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 
-- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.941 20:04:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.941 20:04:41 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:03.941 20:04:41 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.941 20:04:41 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:03.941 20:04:41 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.941 20:04:41 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.941 20:04:41 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:03.941 20:04:41 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:03.941 20:04:41 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:03.941 20:04:41 -- setup/devices.sh@50 -- # local mount_point= 00:04:03.941 20:04:41 -- setup/devices.sh@51 -- # local test_file= 00:04:03.941 20:04:41 -- setup/devices.sh@53 -- # local found=0 00:04:03.941 20:04:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.941 20:04:41 -- setup/devices.sh@59 -- # local pci status 00:04:03.941 20:04:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.941 20:04:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:03.941 20:04:41 -- setup/devices.sh@47 -- # setup output config 00:04:03.941 20:04:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.941 20:04:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:06.482 20:04:43 -- setup/devices.sh@63 -- # found=1 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.482 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.482 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.741 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.741 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.741 20:04:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.741 20:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.741 20:04:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.741 20:04:44 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.741 20:04:44 -- setup/devices.sh@68 -- # return 0 00:04:06.741 20:04:44 -- setup/devices.sh@187 -- # cleanup_dm 00:04:06.741 20:04:44 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.741 20:04:44 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.741 20:04:44 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:06.741 20:04:44 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.741 
20:04:44 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:06.741 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.741 20:04:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.741 20:04:44 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:06.741 00:04:06.741 real 0m9.398s 00:04:06.741 user 0m2.251s 00:04:06.741 sys 0m4.023s 00:04:06.741 20:04:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.742 20:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:06.742 ************************************ 00:04:06.742 END TEST dm_mount 00:04:06.742 ************************************ 00:04:06.742 20:04:44 -- setup/devices.sh@1 -- # cleanup 00:04:06.742 20:04:44 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:06.742 20:04:44 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.001 20:04:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.001 20:04:44 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:07.001 20:04:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.001 20:04:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.261 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:07.261 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:07.261 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:07.261 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:07.261 20:04:44 -- setup/devices.sh@12 -- # cleanup_dm 00:04:07.261 20:04:44 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.261 20:04:44 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:07.261 20:04:44 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.261 20:04:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:07.261 20:04:44 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.261 20:04:44 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:07.261 00:04:07.261 real 0m25.434s 00:04:07.261 user 0m7.364s 00:04:07.261 sys 0m12.621s 00:04:07.261 20:04:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:07.261 20:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:07.261 ************************************ 00:04:07.261 END TEST devices 00:04:07.261 ************************************ 00:04:07.261 00:04:07.261 real 1m25.146s 00:04:07.261 user 0m28.432s 00:04:07.261 sys 0m46.885s 00:04:07.261 20:04:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:07.261 20:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:07.261 ************************************ 00:04:07.261 END TEST setup.sh 00:04:07.261 ************************************ 00:04:07.261 20:04:44 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:10.592 Hugepages 00:04:10.592 node hugesize free / total 00:04:10.592 node0 1048576kB 0 / 0 00:04:10.592 node0 2048kB 2048 / 2048 00:04:10.592 node1 1048576kB 0 / 0 00:04:10.592 node1 2048kB 0 / 0 00:04:10.592 00:04:10.592 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:10.592 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:10.592 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:10.592 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:10.592 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:10.592 I/OAT 
0000:00:04.4 8086 2021 0 ioatdma - - 00:04:10.592 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:10.592 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:10.592 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:10.592 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:10.592 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:04:10.592 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:10.592 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:10.592 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:10.592 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:10.592 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:10.592 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:10.592 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:10.592 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:10.592 20:04:47 -- spdk/autotest.sh@141 -- # uname -s 00:04:10.593 20:04:47 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:10.593 20:04:47 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:10.593 20:04:47 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.129 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:13.389 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.389 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.649 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.649 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.588 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.588 20:04:51 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:15.527 20:04:52 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:15.527 20:04:52 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:15.527 20:04:52 -- common/autotest_common.sh@1517 -- # bdfs=($(get_nvme_bdfs)) 00:04:15.527 20:04:52 -- common/autotest_common.sh@1517 -- # get_nvme_bdfs 00:04:15.527 20:04:52 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:15.527 20:04:52 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:15.527 20:04:52 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.527 20:04:52 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:15.527 20:04:52 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:15.527 20:04:52 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:15.527 20:04:52 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:15.527 20:04:52 -- common/autotest_common.sh@1519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.062 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:18.321 Waiting for block devices as requested 00:04:18.321 0000:5e:00.0 (8086 0a54): vfio-pci -> 
nvme 00:04:18.581 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:18.581 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:18.581 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:18.841 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:18.841 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:18.841 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.101 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.101 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:19.101 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.101 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.361 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.361 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:19.361 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:19.361 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.621 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.621 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:19.621 20:04:56 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}" 00:04:19.621 20:04:56 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:19.622 20:04:56 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:19.622 20:04:56 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:19.622 20:04:57 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:19.622 20:04:57 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:19.622 20:04:57 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:19.622 20:04:57 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:19.622 20:04:57 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme0 00:04:19.622 20:04:57 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme0 ]] 00:04:19.622 20:04:57 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme0 00:04:19.622 20:04:57 -- common/autotest_common.sh@1528 -- # grep oacs 00:04:19.622 20:04:57 -- common/autotest_common.sh@1528 -- # cut -d: -f2 00:04:19.622 20:04:57 -- common/autotest_common.sh@1528 -- # oacs=' 0xf' 00:04:19.622 20:04:57 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8 00:04:19.622 20:04:57 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]] 00:04:19.622 20:04:57 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme0 00:04:19.622 20:04:57 -- common/autotest_common.sh@1537 -- # grep unvmcap 00:04:19.622 20:04:57 -- common/autotest_common.sh@1537 -- # cut -d: -f2 00:04:19.622 20:04:57 -- common/autotest_common.sh@1537 -- # unvmcap=' 0' 00:04:19.622 20:04:57 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]] 00:04:19.622 20:04:57 -- common/autotest_common.sh@1540 -- # continue 00:04:19.622 20:04:57 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:19.622 20:04:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:19.622 20:04:57 -- common/autotest_common.sh@10 -- # set +x 00:04:19.881 20:04:57 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:19.881 20:04:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:19.881 20:04:57 -- common/autotest_common.sh@10 -- # set +x 00:04:19.881 20:04:57 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.178 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:23.178 0000:00:04.7 (8086 
2021): ioatdma -> vfio-pci 00:04:23.178 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:23.178 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.118 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:24.118 20:05:01 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:24.118 20:05:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:24.118 20:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:24.118 20:05:01 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:24.118 20:05:01 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:04:24.118 20:05:01 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.118 20:05:01 -- common/autotest_common.sh@1560 -- # bdfs=() 00:04:24.118 20:05:01 -- common/autotest_common.sh@1560 -- # local bdfs 00:04:24.118 20:05:01 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:24.118 20:05:01 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:24.118 20:05:01 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:24.118 20:05:01 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.118 20:05:01 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.118 20:05:01 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:24.118 20:05:01 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:24.118 20:05:01 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:24.118 20:05:01 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:04:24.378 20:05:01 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:24.378 20:05:01 -- common/autotest_common.sh@1563 -- # device=0x0a54 00:04:24.378 20:05:01 -- common/autotest_common.sh@1564 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:24.378 20:05:01 -- common/autotest_common.sh@1565 -- # bdfs+=($bdf) 00:04:24.378 20:05:01 -- common/autotest_common.sh@1569 -- # printf '%s\n' 0000:5e:00.0 00:04:24.378 20:05:01 -- common/autotest_common.sh@1575 -- # [[ -z 0000:5e:00.0 ]] 00:04:24.378 20:05:01 -- common/autotest_common.sh@1580 -- # spdk_tgt_pid=1589009 00:04:24.378 20:05:01 -- common/autotest_common.sh@1581 -- # waitforlisten 1589009 00:04:24.378 20:05:01 -- common/autotest_common.sh@1579 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.378 20:05:01 -- common/autotest_common.sh@817 -- # '[' -z 1589009 ']' 00:04:24.378 20:05:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.378 20:05:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:24.378 20:05:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.378 20:05:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:24.378 20:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:24.378 [2024-02-14 20:05:01.591959] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:24.378 [2024-02-14 20:05:01.592008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589009 ] 00:04:24.378 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.378 [2024-02-14 20:05:01.653848] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.378 [2024-02-14 20:05:01.728400] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:24.378 [2024-02-14 20:05:01.728518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.317 20:05:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:25.317 20:05:02 -- common/autotest_common.sh@850 -- # return 0 00:04:25.317 20:05:02 -- common/autotest_common.sh@1583 -- # bdf_id=0 00:04:25.317 20:05:02 -- common/autotest_common.sh@1584 -- # for bdf in "${bdfs[@]}" 00:04:25.317 20:05:02 -- common/autotest_common.sh@1585 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:28.608 nvme0n1 00:04:28.608 20:05:05 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:28.608 [2024-02-14 20:05:05.507642] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:28.608 [2024-02-14 20:05:05.507678] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:28.608 request: 00:04:28.608 { 00:04:28.608 "nvme_ctrlr_name": "nvme0", 00:04:28.608 "password": "test", 00:04:28.608 "method": "bdev_nvme_opal_revert", 00:04:28.608 "req_id": 1 00:04:28.608 } 00:04:28.608 Got JSON-RPC error response 00:04:28.608 response: 00:04:28.608 { 00:04:28.608 "code": -32603, 00:04:28.608 "message": "Internal error" 00:04:28.608 } 00:04:28.608 20:05:05 -- common/autotest_common.sh@1587 -- # true 00:04:28.608 20:05:05 -- common/autotest_common.sh@1588 -- # (( ++bdf_id )) 00:04:28.608 20:05:05 -- common/autotest_common.sh@1591 -- # killprocess 1589009 00:04:28.608 20:05:05 -- common/autotest_common.sh@924 -- # '[' -z 1589009 ']' 00:04:28.608 20:05:05 -- common/autotest_common.sh@928 -- # kill -0 1589009 00:04:28.608 20:05:05 -- common/autotest_common.sh@929 -- # uname 00:04:28.608 20:05:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:28.608 20:05:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1589009 00:04:28.608 20:05:05 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:28.608 20:05:05 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:28.608 20:05:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1589009' 00:04:28.608 killing process with pid 1589009 00:04:28.608 20:05:05 -- common/autotest_common.sh@943 -- # kill 1589009 00:04:28.608 20:05:05 -- common/autotest_common.sh@948 -- # wait 1589009 00:04:29.987 20:05:07 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 
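The OPAL revert attempt traced above can be replayed by hand against a running spdk_tgt. The sketch below is an approximation assembled from the commands visible in the trace; $SPDK_DIR stands in for the workspace path, and tolerating the -32603 "Internal error" response mirrors how the test swallows the failed revert (the bare `true` after the RPC in the trace).

# Sketch (assumptions: $SPDK_DIR points at the SPDK checkout; a spdk_tgt
# is already listening on the default /var/tmp/spdk.sock).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"
# Attach the PCIe controller at 0000:5e:00.0 as bdev controller "nvme0";
# this is what creates the nvme0n1 bdev seen in the trace.
"$RPC" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
# Revert the TPer with password "test". On this drive the admin SP session
# fails (error 18), so the target answers with JSON-RPC error -32603 and
# rpc.py exits non-zero; the test deliberately ignores that outcome.
"$RPC" bdev_nvme_opal_revert -b nvme0 -p test || true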
00:04:29.987 20:05:07 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:29.987 20:05:07 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:29.987 20:05:07 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:29.987 20:05:07 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:29.987 20:05:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:29.987 20:05:07 -- common/autotest_common.sh@10 -- # set +x 00:04:29.987 20:05:07 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:29.987 20:05:07 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:29.987 20:05:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:29.987 20:05:07 -- common/autotest_common.sh@10 -- # set +x 00:04:29.987 ************************************ 00:04:29.987 START TEST env 00:04:29.987 ************************************ 00:04:29.987 20:05:07 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:29.987 * Looking for test storage... 00:04:29.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:29.987 20:05:07 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:29.987 20:05:07 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:29.987 20:05:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:29.987 20:05:07 -- common/autotest_common.sh@10 -- # set +x 00:04:29.987 ************************************ 00:04:29.987 START TEST env_memory 00:04:29.987 ************************************ 00:04:29.987 20:05:07 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:29.987 00:04:29.987 00:04:29.987 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.987 http://cunit.sourceforge.net/ 00:04:29.987 00:04:29.987 00:04:29.987 Suite: memory 00:04:29.987 Test: alloc and free memory map ...[2024-02-14 20:05:07.348055] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:29.987 passed 00:04:29.987 Test: mem map translation ...[2024-02-14 20:05:07.366035] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:29.987 [2024-02-14 20:05:07.366050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:29.987 [2024-02-14 20:05:07.366084] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:29.987 [2024-02-14 20:05:07.366091] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:29.987 passed 00:04:29.987 Test: mem map registration ...[2024-02-14 20:05:07.402844] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:29.987 [2024-02-14 20:05:07.402860] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 
len=2097152 00:04:30.247 passed 00:04:30.247 Test: mem map adjacent registrations ...passed 00:04:30.247 00:04:30.247 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.247 suites 1 1 n/a 0 0 00:04:30.247 tests 4 4 4 0 0 00:04:30.247 asserts 152 152 152 0 n/a 00:04:30.247 00:04:30.247 Elapsed time = 0.135 seconds 00:04:30.247 00:04:30.247 real 0m0.146s 00:04:30.247 user 0m0.136s 00:04:30.247 sys 0m0.009s 00:04:30.247 20:05:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:30.247 20:05:07 -- common/autotest_common.sh@10 -- # set +x 00:04:30.247 ************************************ 00:04:30.247 END TEST env_memory 00:04:30.247 ************************************ 00:04:30.247 20:05:07 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.247 20:05:07 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:30.247 20:05:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:30.247 20:05:07 -- common/autotest_common.sh@10 -- # set +x 00:04:30.247 ************************************ 00:04:30.247 START TEST env_vtophys 00:04:30.247 ************************************ 00:04:30.247 20:05:07 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.247 EAL: lib.eal log level changed from notice to debug 00:04:30.247 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.247 EAL: Detected lcore 1 as core 1 on socket 0 00:04:30.247 EAL: Detected lcore 2 as core 2 on socket 0 00:04:30.247 EAL: Detected lcore 3 as core 3 on socket 0 00:04:30.247 EAL: Detected lcore 4 as core 4 on socket 0 00:04:30.247 EAL: Detected lcore 5 as core 5 on socket 0 00:04:30.247 EAL: Detected lcore 6 as core 6 on socket 0 00:04:30.247 EAL: Detected lcore 7 as core 8 on socket 0 00:04:30.247 EAL: Detected lcore 8 as core 9 on socket 0 00:04:30.247 EAL: Detected lcore 9 as core 10 on socket 0 00:04:30.247 EAL: Detected lcore 10 as core 11 on socket 0 00:04:30.247 EAL: Detected lcore 11 as core 12 on socket 0 00:04:30.247 EAL: Detected lcore 12 as core 13 on socket 0 00:04:30.247 EAL: Detected lcore 13 as core 16 on socket 0 00:04:30.247 EAL: Detected lcore 14 as core 17 on socket 0 00:04:30.247 EAL: Detected lcore 15 as core 18 on socket 0 00:04:30.247 EAL: Detected lcore 16 as core 19 on socket 0 00:04:30.247 EAL: Detected lcore 17 as core 20 on socket 0 00:04:30.247 EAL: Detected lcore 18 as core 21 on socket 0 00:04:30.247 EAL: Detected lcore 19 as core 25 on socket 0 00:04:30.247 EAL: Detected lcore 20 as core 26 on socket 0 00:04:30.247 EAL: Detected lcore 21 as core 27 on socket 0 00:04:30.247 EAL: Detected lcore 22 as core 28 on socket 0 00:04:30.247 EAL: Detected lcore 23 as core 29 on socket 0 00:04:30.247 EAL: Detected lcore 24 as core 0 on socket 1 00:04:30.247 EAL: Detected lcore 25 as core 1 on socket 1 00:04:30.247 EAL: Detected lcore 26 as core 2 on socket 1 00:04:30.247 EAL: Detected lcore 27 as core 3 on socket 1 00:04:30.247 EAL: Detected lcore 28 as core 4 on socket 1 00:04:30.247 EAL: Detected lcore 29 as core 5 on socket 1 00:04:30.247 EAL: Detected lcore 30 as core 6 on socket 1 00:04:30.247 EAL: Detected lcore 31 as core 8 on socket 1 00:04:30.247 EAL: Detected lcore 32 as core 9 on socket 1 00:04:30.247 EAL: Detected lcore 33 as core 10 on socket 1 00:04:30.247 EAL: Detected lcore 34 as core 11 on socket 1 00:04:30.247 EAL: Detected lcore 35 as core 12 on socket 1 00:04:30.247 EAL: Detected lcore 36 as core 13 on socket 1 
00:04:30.247 EAL: Detected lcore 37 as core 16 on socket 1 00:04:30.247 EAL: Detected lcore 38 as core 17 on socket 1 00:04:30.247 EAL: Detected lcore 39 as core 18 on socket 1 00:04:30.247 EAL: Detected lcore 40 as core 19 on socket 1 00:04:30.247 EAL: Detected lcore 41 as core 20 on socket 1 00:04:30.247 EAL: Detected lcore 42 as core 21 on socket 1 00:04:30.247 EAL: Detected lcore 43 as core 25 on socket 1 00:04:30.247 EAL: Detected lcore 44 as core 26 on socket 1 00:04:30.247 EAL: Detected lcore 45 as core 27 on socket 1 00:04:30.247 EAL: Detected lcore 46 as core 28 on socket 1 00:04:30.247 EAL: Detected lcore 47 as core 29 on socket 1 00:04:30.247 EAL: Detected lcore 48 as core 0 on socket 0 00:04:30.247 EAL: Detected lcore 49 as core 1 on socket 0 00:04:30.247 EAL: Detected lcore 50 as core 2 on socket 0 00:04:30.247 EAL: Detected lcore 51 as core 3 on socket 0 00:04:30.247 EAL: Detected lcore 52 as core 4 on socket 0 00:04:30.247 EAL: Detected lcore 53 as core 5 on socket 0 00:04:30.247 EAL: Detected lcore 54 as core 6 on socket 0 00:04:30.247 EAL: Detected lcore 55 as core 8 on socket 0 00:04:30.247 EAL: Detected lcore 56 as core 9 on socket 0 00:04:30.247 EAL: Detected lcore 57 as core 10 on socket 0 00:04:30.247 EAL: Detected lcore 58 as core 11 on socket 0 00:04:30.247 EAL: Detected lcore 59 as core 12 on socket 0 00:04:30.247 EAL: Detected lcore 60 as core 13 on socket 0 00:04:30.247 EAL: Detected lcore 61 as core 16 on socket 0 00:04:30.247 EAL: Detected lcore 62 as core 17 on socket 0 00:04:30.247 EAL: Detected lcore 63 as core 18 on socket 0 00:04:30.247 EAL: Detected lcore 64 as core 19 on socket 0 00:04:30.247 EAL: Detected lcore 65 as core 20 on socket 0 00:04:30.247 EAL: Detected lcore 66 as core 21 on socket 0 00:04:30.247 EAL: Detected lcore 67 as core 25 on socket 0 00:04:30.247 EAL: Detected lcore 68 as core 26 on socket 0 00:04:30.247 EAL: Detected lcore 69 as core 27 on socket 0 00:04:30.247 EAL: Detected lcore 70 as core 28 on socket 0 00:04:30.247 EAL: Detected lcore 71 as core 29 on socket 0 00:04:30.247 EAL: Detected lcore 72 as core 0 on socket 1 00:04:30.248 EAL: Detected lcore 73 as core 1 on socket 1 00:04:30.248 EAL: Detected lcore 74 as core 2 on socket 1 00:04:30.248 EAL: Detected lcore 75 as core 3 on socket 1 00:04:30.248 EAL: Detected lcore 76 as core 4 on socket 1 00:04:30.248 EAL: Detected lcore 77 as core 5 on socket 1 00:04:30.248 EAL: Detected lcore 78 as core 6 on socket 1 00:04:30.248 EAL: Detected lcore 79 as core 8 on socket 1 00:04:30.248 EAL: Detected lcore 80 as core 9 on socket 1 00:04:30.248 EAL: Detected lcore 81 as core 10 on socket 1 00:04:30.248 EAL: Detected lcore 82 as core 11 on socket 1 00:04:30.248 EAL: Detected lcore 83 as core 12 on socket 1 00:04:30.248 EAL: Detected lcore 84 as core 13 on socket 1 00:04:30.248 EAL: Detected lcore 85 as core 16 on socket 1 00:04:30.248 EAL: Detected lcore 86 as core 17 on socket 1 00:04:30.248 EAL: Detected lcore 87 as core 18 on socket 1 00:04:30.248 EAL: Detected lcore 88 as core 19 on socket 1 00:04:30.248 EAL: Detected lcore 89 as core 20 on socket 1 00:04:30.248 EAL: Detected lcore 90 as core 21 on socket 1 00:04:30.248 EAL: Detected lcore 91 as core 25 on socket 1 00:04:30.248 EAL: Detected lcore 92 as core 26 on socket 1 00:04:30.248 EAL: Detected lcore 93 as core 27 on socket 1 00:04:30.248 EAL: Detected lcore 94 as core 28 on socket 1 00:04:30.248 EAL: Detected lcore 95 as core 29 on socket 1 00:04:30.248 EAL: Maximum logical cores by configuration: 128 00:04:30.248 EAL: Detected 
CPU lcores: 96 00:04:30.248 EAL: Detected NUMA nodes: 2 00:04:30.248 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:30.248 EAL: Detected shared linkage of DPDK 00:04:30.248 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.248 EAL: Bus pci wants IOVA as 'DC' 00:04:30.248 EAL: Buses did not request a specific IOVA mode. 00:04:30.248 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:30.248 EAL: Selected IOVA mode 'VA' 00:04:30.248 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.248 EAL: Probing VFIO support... 00:04:30.248 EAL: IOMMU type 1 (Type 1) is supported 00:04:30.248 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:30.248 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:30.248 EAL: VFIO support initialized 00:04:30.248 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.248 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.248 EAL: Setting up physically contiguous memory... 00:04:30.248 EAL: Setting maximum number of open files to 524288 00:04:30.248 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.248 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:30.248 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.248 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.248 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.248 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.248 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.248 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.248 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.248 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.248 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.248 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.248 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.248 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.248 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.248 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.248 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.248 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.248 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.248 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.248 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.248 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.248 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.248 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.248 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.248 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.248 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.248 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:30.248 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.248 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:30.248 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.248 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.248 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:30.248 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:30.248 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.248 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:30.248 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:30.248 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.248 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:30.248 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:30.248 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.248 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:30.248 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.248 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.248 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:30.248 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:30.248 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.248 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:30.248 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.248 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.248 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:30.248 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:30.248 EAL: Hugepages will be freed exactly as allocated. 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: TSC frequency is ~2100000 KHz 00:04:30.248 EAL: Main lcore 0 is ready (tid=7f4c4e55ba00;cpuset=[0]) 00:04:30.248 EAL: Trying to obtain current memory policy. 00:04:30.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.248 EAL: Restoring previous memory policy: 0 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.248 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.248 00:04:30.248 00:04:30.248 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.248 http://cunit.sourceforge.net/ 00:04:30.248 00:04:30.248 00:04:30.248 Suite: components_suite 00:04:30.248 Test: vtophys_malloc_test ...passed 00:04:30.248 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.248 EAL: Restoring previous memory policy: 4 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.248 EAL: Trying to obtain current memory policy. 00:04:30.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.248 EAL: Restoring previous memory policy: 4 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.248 EAL: Trying to obtain current memory policy. 
00:04:30.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.248 EAL: Restoring previous memory policy: 4 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.248 EAL: Trying to obtain current memory policy. 00:04:30.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.248 EAL: Restoring previous memory policy: 4 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.248 EAL: Trying to obtain current memory policy. 00:04:30.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.248 EAL: Restoring previous memory policy: 4 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.248 EAL: Trying to obtain current memory policy. 00:04:30.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.248 EAL: Restoring previous memory policy: 4 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.249 EAL: request: mp_malloc_sync 00:04:30.249 EAL: No shared files mode enabled, IPC is disabled 00:04:30.249 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.249 EAL: Trying to obtain current memory policy. 00:04:30.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.249 EAL: Restoring previous memory policy: 4 00:04:30.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.249 EAL: request: mp_malloc_sync 00:04:30.249 EAL: No shared files mode enabled, IPC is disabled 00:04:30.249 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.508 EAL: request: mp_malloc_sync 00:04:30.508 EAL: No shared files mode enabled, IPC is disabled 00:04:30.508 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.508 EAL: Trying to obtain current memory policy. 
00:04:30.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.508 EAL: Restoring previous memory policy: 4 00:04:30.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.508 EAL: request: mp_malloc_sync 00:04:30.508 EAL: No shared files mode enabled, IPC is disabled 00:04:30.508 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.508 EAL: request: mp_malloc_sync 00:04:30.508 EAL: No shared files mode enabled, IPC is disabled 00:04:30.508 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.508 EAL: Trying to obtain current memory policy. 00:04:30.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.508 EAL: Restoring previous memory policy: 4 00:04:30.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.508 EAL: request: mp_malloc_sync 00:04:30.508 EAL: No shared files mode enabled, IPC is disabled 00:04:30.508 EAL: Heap on socket 0 was expanded by 514MB 00:04:30.768 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.768 EAL: request: mp_malloc_sync 00:04:30.768 EAL: No shared files mode enabled, IPC is disabled 00:04:30.768 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.768 EAL: Trying to obtain current memory policy. 00:04:30.768 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.028 EAL: Restoring previous memory policy: 4 00:04:31.028 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.028 EAL: request: mp_malloc_sync 00:04:31.028 EAL: No shared files mode enabled, IPC is disabled 00:04:31.028 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.028 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.313 EAL: request: mp_malloc_sync 00:04:31.313 EAL: No shared files mode enabled, IPC is disabled 00:04:31.313 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.313 passed 00:04:31.313 00:04:31.313 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.313 suites 1 1 n/a 0 0 00:04:31.313 tests 2 2 2 0 0 00:04:31.313 asserts 497 497 497 0 n/a 00:04:31.313 00:04:31.313 Elapsed time = 0.961 seconds 00:04:31.313 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.313 EAL: request: mp_malloc_sync 00:04:31.313 EAL: No shared files mode enabled, IPC is disabled 00:04:31.313 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.313 EAL: No shared files mode enabled, IPC is disabled 00:04:31.313 EAL: No shared files mode enabled, IPC is disabled 00:04:31.313 EAL: No shared files mode enabled, IPC is disabled 00:04:31.313 00:04:31.313 real 0m1.077s 00:04:31.313 user 0m0.631s 00:04:31.313 sys 0m0.416s 00:04:31.313 20:05:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.313 20:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:31.313 ************************************ 00:04:31.313 END TEST env_vtophys 00:04:31.313 ************************************ 00:04:31.313 20:05:08 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.313 20:05:08 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:31.313 20:05:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:31.313 20:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:31.313 ************************************ 00:04:31.313 START TEST env_pci 00:04:31.313 ************************************ 00:04:31.313 20:05:08 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.313 00:04:31.313 00:04:31.313 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.313 
http://cunit.sourceforge.net/ 00:04:31.313 00:04:31.313 00:04:31.313 Suite: pci 00:04:31.313 Test: pci_hook ...[2024-02-14 20:05:08.625160] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1590375 has claimed it 00:04:31.313 EAL: Cannot find device (10000:00:01.0) 00:04:31.313 EAL: Failed to attach device on primary process 00:04:31.313 passed 00:04:31.313 00:04:31.313 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.313 suites 1 1 n/a 0 0 00:04:31.313 tests 1 1 1 0 0 00:04:31.313 asserts 25 25 25 0 n/a 00:04:31.313 00:04:31.313 Elapsed time = 0.030 seconds 00:04:31.313 00:04:31.313 real 0m0.049s 00:04:31.313 user 0m0.012s 00:04:31.313 sys 0m0.037s 00:04:31.313 20:05:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.314 20:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:31.314 ************************************ 00:04:31.314 END TEST env_pci 00:04:31.314 ************************************ 00:04:31.314 20:05:08 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.314 20:05:08 -- env/env.sh@15 -- # uname 00:04:31.314 20:05:08 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.314 20:05:08 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.314 20:05:08 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.314 20:05:08 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:04:31.314 20:05:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:31.314 20:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:31.314 ************************************ 00:04:31.314 START TEST env_dpdk_post_init 00:04:31.314 ************************************ 00:04:31.314 20:05:08 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.573 EAL: Detected CPU lcores: 96 00:04:31.573 EAL: Detected NUMA nodes: 2 00:04:31.573 EAL: Detected shared linkage of DPDK 00:04:31.573 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.573 EAL: Selected IOVA mode 'VA' 00:04:31.573 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.573 EAL: VFIO support initialized 00:04:31.573 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.573 EAL: Using IOMMU type 1 (Type 1) 00:04:31.573 EAL: Ignore mapping IO port bar(1) 00:04:31.573 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:31.573 EAL: Ignore mapping IO port bar(1) 00:04:31.573 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:31.573 EAL: Ignore mapping IO port bar(1) 00:04:31.573 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:31.573 EAL: Ignore mapping IO port bar(1) 00:04:31.573 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:31.573 EAL: Ignore mapping IO port bar(1) 00:04:31.573 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:31.573 EAL: Ignore mapping IO port bar(1) 00:04:31.573 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:31.573 EAL: Ignore mapping IO port bar(1) 00:04:31.573 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:31.573 EAL: 
Ignore mapping IO port bar(1) 00:04:31.573 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:32.511 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:32.511 EAL: Ignore mapping IO port bar(1) 00:04:32.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:32.511 EAL: Ignore mapping IO port bar(1) 00:04:32.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:32.511 EAL: Ignore mapping IO port bar(1) 00:04:32.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:32.511 EAL: Ignore mapping IO port bar(1) 00:04:32.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:32.511 EAL: Ignore mapping IO port bar(1) 00:04:32.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:32.511 EAL: Ignore mapping IO port bar(1) 00:04:32.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:32.511 EAL: Ignore mapping IO port bar(1) 00:04:32.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:32.511 EAL: Ignore mapping IO port bar(1) 00:04:32.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:35.799 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:35.799 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:35.799 Starting DPDK initialization... 00:04:35.799 Starting SPDK post initialization... 00:04:35.799 SPDK NVMe probe 00:04:35.799 Attaching to 0000:5e:00.0 00:04:35.799 Attached to 0000:5e:00.0 00:04:35.799 Cleaning up... 00:04:35.799 00:04:35.799 real 0m4.319s 00:04:35.799 user 0m3.254s 00:04:35.799 sys 0m0.133s 00:04:35.799 20:05:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.799 20:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.799 ************************************ 00:04:35.799 END TEST env_dpdk_post_init 00:04:35.799 ************************************ 00:04:35.799 20:05:13 -- env/env.sh@26 -- # uname 00:04:35.799 20:05:13 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:35.799 20:05:13 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.799 20:05:13 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:35.799 20:05:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:35.799 20:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.799 ************************************ 00:04:35.799 START TEST env_mem_callbacks 00:04:35.799 ************************************ 00:04:35.799 20:05:13 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.799 EAL: Detected CPU lcores: 96 00:04:35.799 EAL: Detected NUMA nodes: 2 00:04:35.799 EAL: Detected shared linkage of DPDK 00:04:35.799 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.799 EAL: Selected IOVA mode 'VA' 00:04:35.799 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.799 EAL: VFIO support initialized 00:04:35.799 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.799 00:04:35.799 00:04:35.799 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.799 http://cunit.sourceforge.net/ 00:04:35.799 00:04:35.799 00:04:35.799 Suite: memory 00:04:35.799 Test: test ... 
00:04:35.799 register 0x200000200000 2097152 00:04:35.799 malloc 3145728 00:04:35.799 register 0x200000400000 4194304 00:04:35.799 buf 0x200000500000 len 3145728 PASSED 00:04:35.799 malloc 64 00:04:35.799 buf 0x2000004fff40 len 64 PASSED 00:04:35.799 malloc 4194304 00:04:35.799 register 0x200000800000 6291456 00:04:35.799 buf 0x200000a00000 len 4194304 PASSED 00:04:35.799 free 0x200000500000 3145728 00:04:35.799 free 0x2000004fff40 64 00:04:35.799 unregister 0x200000400000 4194304 PASSED 00:04:35.799 free 0x200000a00000 4194304 00:04:35.799 unregister 0x200000800000 6291456 PASSED 00:04:35.799 malloc 8388608 00:04:35.799 register 0x200000400000 10485760 00:04:35.799 buf 0x200000600000 len 8388608 PASSED 00:04:35.799 free 0x200000600000 8388608 00:04:35.799 unregister 0x200000400000 10485760 PASSED 00:04:35.799 passed 00:04:35.799 00:04:35.799 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.799 suites 1 1 n/a 0 0 00:04:35.799 tests 1 1 1 0 0 00:04:35.799 asserts 15 15 15 0 n/a 00:04:35.799 00:04:35.799 Elapsed time = 0.005 seconds 00:04:35.799 00:04:35.799 real 0m0.055s 00:04:35.799 user 0m0.017s 00:04:35.799 sys 0m0.038s 00:04:35.799 20:05:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.799 20:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.799 ************************************ 00:04:35.799 END TEST env_mem_callbacks 00:04:35.799 ************************************ 00:04:35.799 00:04:35.799 real 0m5.935s 00:04:35.799 user 0m4.157s 00:04:35.799 sys 0m0.852s 00:04:35.799 20:05:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.799 20:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.799 ************************************ 00:04:35.799 END TEST env 00:04:35.799 ************************************ 00:04:35.799 20:05:13 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:35.799 20:05:13 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:35.799 20:05:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:35.799 20:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.799 ************************************ 00:04:35.799 START TEST rpc 00:04:35.799 ************************************ 00:04:35.799 20:05:13 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.067 * Looking for test storage... 00:04:36.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.067 20:05:13 -- rpc/rpc.sh@65 -- # spdk_pid=1591191 00:04:36.067 20:05:13 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:36.067 20:05:13 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.067 20:05:13 -- rpc/rpc.sh@67 -- # waitforlisten 1591191 00:04:36.067 20:05:13 -- common/autotest_common.sh@817 -- # '[' -z 1591191 ']' 00:04:36.067 20:05:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.067 20:05:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.067 20:05:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
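The env suites above (env_pci, env_dpdk_post_init, env_mem_callbacks) are standalone CUnit binaries under test/env/ and can be rerun outside autotest. A minimal sketch using only the paths and flags the trace itself shows (the env_pci binary's path does not appear in this excerpt, so it is omitted); assumes hugepages are already configured and root access for VFIO:

  # Re-run the two env suites whose invocations the trace records verbatim.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
  sudo test/env/mem_callbacks/mem_callbacks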
00:04:36.067 20:05:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.067 20:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:36.067 [2024-02-14 20:05:13.318347] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:36.067 [2024-02-14 20:05:13.318395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591191 ] 00:04:36.067 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.067 [2024-02-14 20:05:13.378611] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.067 [2024-02-14 20:05:13.448100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.067 [2024-02-14 20:05:13.448211] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.067 [2024-02-14 20:05:13.448219] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1591191' to capture a snapshot of events at runtime. 00:04:36.067 [2024-02-14 20:05:13.448225] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1591191 for offline analysis/debug. 00:04:36.067 [2024-02-14 20:05:13.448243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.005 20:05:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.005 20:05:14 -- common/autotest_common.sh@850 -- # return 0 00:04:37.006 20:05:14 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.006 20:05:14 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.006 20:05:14 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.006 20:05:14 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.006 20:05:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:37.006 20:05:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 ************************************ 00:04:37.006 START TEST rpc_integrity 00:04:37.006 ************************************ 00:04:37.006 20:05:14 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:04:37.006 20:05:14 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.006 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.006 20:05:14 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.006 20:05:14 -- rpc/rpc.sh@13 -- # jq length 00:04:37.006 20:05:14 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.006 20:05:14 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.006 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 20:05:14 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:04:37.006 20:05:14 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.006 20:05:14 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.006 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.006 20:05:14 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.006 { 00:04:37.006 "name": "Malloc0", 00:04:37.006 "aliases": [ 00:04:37.006 "6baf0bf7-c2e8-4cd2-b4bf-a7e39125af30" 00:04:37.006 ], 00:04:37.006 "product_name": "Malloc disk", 00:04:37.006 "block_size": 512, 00:04:37.006 "num_blocks": 16384, 00:04:37.006 "uuid": "6baf0bf7-c2e8-4cd2-b4bf-a7e39125af30", 00:04:37.006 "assigned_rate_limits": { 00:04:37.006 "rw_ios_per_sec": 0, 00:04:37.006 "rw_mbytes_per_sec": 0, 00:04:37.006 "r_mbytes_per_sec": 0, 00:04:37.006 "w_mbytes_per_sec": 0 00:04:37.006 }, 00:04:37.006 "claimed": false, 00:04:37.006 "zoned": false, 00:04:37.006 "supported_io_types": { 00:04:37.006 "read": true, 00:04:37.006 "write": true, 00:04:37.006 "unmap": true, 00:04:37.006 "write_zeroes": true, 00:04:37.006 "flush": true, 00:04:37.006 "reset": true, 00:04:37.006 "compare": false, 00:04:37.006 "compare_and_write": false, 00:04:37.006 "abort": true, 00:04:37.006 "nvme_admin": false, 00:04:37.006 "nvme_io": false 00:04:37.006 }, 00:04:37.006 "memory_domains": [ 00:04:37.006 { 00:04:37.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.006 "dma_device_type": 2 00:04:37.006 } 00:04:37.006 ], 00:04:37.006 "driver_specific": {} 00:04:37.006 } 00:04:37.006 ]' 00:04:37.006 20:05:14 -- rpc/rpc.sh@17 -- # jq length 00:04:37.006 20:05:14 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.006 20:05:14 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.006 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 [2024-02-14 20:05:14.242077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.006 [2024-02-14 20:05:14.242112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.006 [2024-02-14 20:05:14.242124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23492b0 00:04:37.006 [2024-02-14 20:05:14.242129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.006 [2024-02-14 20:05:14.243174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.006 [2024-02-14 20:05:14.243196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.006 Passthru0 00:04:37.006 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.006 20:05:14 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.006 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.006 20:05:14 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.006 { 00:04:37.006 "name": "Malloc0", 00:04:37.006 "aliases": [ 00:04:37.006 "6baf0bf7-c2e8-4cd2-b4bf-a7e39125af30" 00:04:37.006 ], 00:04:37.006 "product_name": "Malloc disk", 00:04:37.006 "block_size": 512, 00:04:37.006 "num_blocks": 16384, 00:04:37.006 "uuid": "6baf0bf7-c2e8-4cd2-b4bf-a7e39125af30", 00:04:37.006 "assigned_rate_limits": { 00:04:37.006 "rw_ios_per_sec": 0, 00:04:37.006 "rw_mbytes_per_sec": 0, 00:04:37.006 
"r_mbytes_per_sec": 0, 00:04:37.006 "w_mbytes_per_sec": 0 00:04:37.006 }, 00:04:37.006 "claimed": true, 00:04:37.006 "claim_type": "exclusive_write", 00:04:37.006 "zoned": false, 00:04:37.006 "supported_io_types": { 00:04:37.006 "read": true, 00:04:37.006 "write": true, 00:04:37.006 "unmap": true, 00:04:37.006 "write_zeroes": true, 00:04:37.006 "flush": true, 00:04:37.006 "reset": true, 00:04:37.006 "compare": false, 00:04:37.006 "compare_and_write": false, 00:04:37.006 "abort": true, 00:04:37.006 "nvme_admin": false, 00:04:37.006 "nvme_io": false 00:04:37.006 }, 00:04:37.006 "memory_domains": [ 00:04:37.006 { 00:04:37.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.006 "dma_device_type": 2 00:04:37.006 } 00:04:37.006 ], 00:04:37.006 "driver_specific": {} 00:04:37.006 }, 00:04:37.006 { 00:04:37.006 "name": "Passthru0", 00:04:37.006 "aliases": [ 00:04:37.006 "560fc5b4-b926-5ddc-9494-341b487472c9" 00:04:37.006 ], 00:04:37.006 "product_name": "passthru", 00:04:37.006 "block_size": 512, 00:04:37.006 "num_blocks": 16384, 00:04:37.006 "uuid": "560fc5b4-b926-5ddc-9494-341b487472c9", 00:04:37.006 "assigned_rate_limits": { 00:04:37.006 "rw_ios_per_sec": 0, 00:04:37.006 "rw_mbytes_per_sec": 0, 00:04:37.006 "r_mbytes_per_sec": 0, 00:04:37.006 "w_mbytes_per_sec": 0 00:04:37.006 }, 00:04:37.006 "claimed": false, 00:04:37.006 "zoned": false, 00:04:37.006 "supported_io_types": { 00:04:37.006 "read": true, 00:04:37.006 "write": true, 00:04:37.006 "unmap": true, 00:04:37.006 "write_zeroes": true, 00:04:37.006 "flush": true, 00:04:37.006 "reset": true, 00:04:37.006 "compare": false, 00:04:37.006 "compare_and_write": false, 00:04:37.006 "abort": true, 00:04:37.006 "nvme_admin": false, 00:04:37.006 "nvme_io": false 00:04:37.006 }, 00:04:37.006 "memory_domains": [ 00:04:37.006 { 00:04:37.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.006 "dma_device_type": 2 00:04:37.006 } 00:04:37.006 ], 00:04:37.006 "driver_specific": { 00:04:37.006 "passthru": { 00:04:37.006 "name": "Passthru0", 00:04:37.006 "base_bdev_name": "Malloc0" 00:04:37.006 } 00:04:37.006 } 00:04:37.006 } 00:04:37.006 ]' 00:04:37.006 20:05:14 -- rpc/rpc.sh@21 -- # jq length 00:04:37.006 20:05:14 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.006 20:05:14 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.006 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.006 20:05:14 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.006 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.006 20:05:14 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.006 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.006 20:05:14 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.006 20:05:14 -- rpc/rpc.sh@26 -- # jq length 00:04:37.006 20:05:14 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.006 00:04:37.006 real 0m0.271s 00:04:37.006 user 0m0.173s 00:04:37.006 sys 0m0.036s 00:04:37.006 20:05:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 ************************************ 
00:04:37.006 END TEST rpc_integrity 00:04:37.006 ************************************ 00:04:37.006 20:05:14 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.006 20:05:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:37.006 20:05:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:37.006 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.006 ************************************ 00:04:37.006 START TEST rpc_plugins 00:04:37.006 ************************************ 00:04:37.006 20:05:14 -- common/autotest_common.sh@1102 -- # rpc_plugins 00:04:37.266 20:05:14 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.266 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.266 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.266 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.266 20:05:14 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.266 20:05:14 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.266 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.266 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.266 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.266 20:05:14 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.266 { 00:04:37.266 "name": "Malloc1", 00:04:37.266 "aliases": [ 00:04:37.266 "16a6eab2-8f4c-4d0e-926a-6970a0a39d53" 00:04:37.266 ], 00:04:37.266 "product_name": "Malloc disk", 00:04:37.266 "block_size": 4096, 00:04:37.266 "num_blocks": 256, 00:04:37.266 "uuid": "16a6eab2-8f4c-4d0e-926a-6970a0a39d53", 00:04:37.266 "assigned_rate_limits": { 00:04:37.266 "rw_ios_per_sec": 0, 00:04:37.266 "rw_mbytes_per_sec": 0, 00:04:37.266 "r_mbytes_per_sec": 0, 00:04:37.266 "w_mbytes_per_sec": 0 00:04:37.266 }, 00:04:37.266 "claimed": false, 00:04:37.266 "zoned": false, 00:04:37.266 "supported_io_types": { 00:04:37.266 "read": true, 00:04:37.266 "write": true, 00:04:37.266 "unmap": true, 00:04:37.266 "write_zeroes": true, 00:04:37.266 "flush": true, 00:04:37.266 "reset": true, 00:04:37.266 "compare": false, 00:04:37.266 "compare_and_write": false, 00:04:37.266 "abort": true, 00:04:37.266 "nvme_admin": false, 00:04:37.266 "nvme_io": false 00:04:37.266 }, 00:04:37.266 "memory_domains": [ 00:04:37.266 { 00:04:37.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.266 "dma_device_type": 2 00:04:37.266 } 00:04:37.266 ], 00:04:37.266 "driver_specific": {} 00:04:37.266 } 00:04:37.266 ]' 00:04:37.266 20:05:14 -- rpc/rpc.sh@32 -- # jq length 00:04:37.266 20:05:14 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.266 20:05:14 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.266 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.266 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.266 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.266 20:05:14 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.266 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.266 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.266 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.266 20:05:14 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.266 20:05:14 -- rpc/rpc.sh@36 -- # jq length 00:04:37.266 20:05:14 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.266 00:04:37.266 real 0m0.142s 00:04:37.266 user 0m0.087s 00:04:37.266 sys 0m0.017s 00:04:37.266 20:05:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.266 20:05:14 -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.266 ************************************ 00:04:37.266 END TEST rpc_plugins 00:04:37.266 ************************************ 00:04:37.266 20:05:14 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.266 20:05:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:37.266 20:05:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:37.266 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.266 ************************************ 00:04:37.266 START TEST rpc_trace_cmd_test 00:04:37.266 ************************************ 00:04:37.266 20:05:14 -- common/autotest_common.sh@1102 -- # rpc_trace_cmd_test 00:04:37.266 20:05:14 -- rpc/rpc.sh@40 -- # local info 00:04:37.266 20:05:14 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.266 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.266 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.266 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.266 20:05:14 -- rpc/rpc.sh@42 -- # info='{ 00:04:37.266 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1591191", 00:04:37.266 "tpoint_group_mask": "0x8", 00:04:37.266 "iscsi_conn": { 00:04:37.266 "mask": "0x2", 00:04:37.266 "tpoint_mask": "0x0" 00:04:37.266 }, 00:04:37.266 "scsi": { 00:04:37.266 "mask": "0x4", 00:04:37.266 "tpoint_mask": "0x0" 00:04:37.266 }, 00:04:37.266 "bdev": { 00:04:37.266 "mask": "0x8", 00:04:37.266 "tpoint_mask": "0xffffffffffffffff" 00:04:37.266 }, 00:04:37.266 "nvmf_rdma": { 00:04:37.266 "mask": "0x10", 00:04:37.266 "tpoint_mask": "0x0" 00:04:37.266 }, 00:04:37.266 "nvmf_tcp": { 00:04:37.266 "mask": "0x20", 00:04:37.266 "tpoint_mask": "0x0" 00:04:37.266 }, 00:04:37.266 "ftl": { 00:04:37.266 "mask": "0x40", 00:04:37.266 "tpoint_mask": "0x0" 00:04:37.266 }, 00:04:37.266 "blobfs": { 00:04:37.266 "mask": "0x80", 00:04:37.266 "tpoint_mask": "0x0" 00:04:37.266 }, 00:04:37.266 "dsa": { 00:04:37.266 "mask": "0x200", 00:04:37.266 "tpoint_mask": "0x0" 00:04:37.266 }, 00:04:37.266 "thread": { 00:04:37.266 "mask": "0x400", 00:04:37.266 "tpoint_mask": "0x0" 00:04:37.266 }, 00:04:37.266 "nvme_pcie": { 00:04:37.267 "mask": "0x800", 00:04:37.267 "tpoint_mask": "0x0" 00:04:37.267 }, 00:04:37.267 "iaa": { 00:04:37.267 "mask": "0x1000", 00:04:37.267 "tpoint_mask": "0x0" 00:04:37.267 }, 00:04:37.267 "nvme_tcp": { 00:04:37.267 "mask": "0x2000", 00:04:37.267 "tpoint_mask": "0x0" 00:04:37.267 }, 00:04:37.267 "bdev_nvme": { 00:04:37.267 "mask": "0x4000", 00:04:37.267 "tpoint_mask": "0x0" 00:04:37.267 } 00:04:37.267 }' 00:04:37.267 20:05:14 -- rpc/rpc.sh@43 -- # jq length 00:04:37.267 20:05:14 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:37.267 20:05:14 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.526 20:05:14 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.526 20:05:14 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.526 20:05:14 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.526 20:05:14 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.526 20:05:14 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.526 20:05:14 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.526 20:05:14 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:37.526 00:04:37.526 real 0m0.197s 00:04:37.526 user 0m0.166s 00:04:37.526 sys 0m0.024s 00:04:37.526 20:05:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.526 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.526 ************************************ 
00:04:37.526 END TEST rpc_trace_cmd_test 00:04:37.526 ************************************ 00:04:37.526 20:05:14 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.526 20:05:14 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.526 20:05:14 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.526 20:05:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:37.526 20:05:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:37.526 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.526 ************************************ 00:04:37.526 START TEST rpc_daemon_integrity 00:04:37.526 ************************************ 00:04:37.526 20:05:14 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:04:37.526 20:05:14 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.526 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.526 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.526 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.526 20:05:14 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.526 20:05:14 -- rpc/rpc.sh@13 -- # jq length 00:04:37.526 20:05:14 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.526 20:05:14 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.526 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.526 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.526 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.526 20:05:14 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:37.526 20:05:14 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.526 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.526 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.526 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.526 20:05:14 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.526 { 00:04:37.526 "name": "Malloc2", 00:04:37.526 "aliases": [ 00:04:37.526 "70ebf6b2-eae9-4090-8bd8-c513391c99ca" 00:04:37.526 ], 00:04:37.526 "product_name": "Malloc disk", 00:04:37.526 "block_size": 512, 00:04:37.526 "num_blocks": 16384, 00:04:37.526 "uuid": "70ebf6b2-eae9-4090-8bd8-c513391c99ca", 00:04:37.526 "assigned_rate_limits": { 00:04:37.526 "rw_ios_per_sec": 0, 00:04:37.526 "rw_mbytes_per_sec": 0, 00:04:37.526 "r_mbytes_per_sec": 0, 00:04:37.526 "w_mbytes_per_sec": 0 00:04:37.526 }, 00:04:37.526 "claimed": false, 00:04:37.526 "zoned": false, 00:04:37.526 "supported_io_types": { 00:04:37.526 "read": true, 00:04:37.526 "write": true, 00:04:37.526 "unmap": true, 00:04:37.526 "write_zeroes": true, 00:04:37.526 "flush": true, 00:04:37.526 "reset": true, 00:04:37.526 "compare": false, 00:04:37.526 "compare_and_write": false, 00:04:37.526 "abort": true, 00:04:37.526 "nvme_admin": false, 00:04:37.526 "nvme_io": false 00:04:37.526 }, 00:04:37.526 "memory_domains": [ 00:04:37.526 { 00:04:37.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.526 "dma_device_type": 2 00:04:37.526 } 00:04:37.526 ], 00:04:37.526 "driver_specific": {} 00:04:37.526 } 00:04:37.526 ]' 00:04:37.526 20:05:14 -- rpc/rpc.sh@17 -- # jq length 00:04:37.785 20:05:14 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.785 20:05:14 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:37.785 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.785 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.785 [2024-02-14 20:05:14.968057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:37.785 [2024-02-14 
20:05:14.968087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.785 [2024-02-14 20:05:14.968100] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2348dc0 00:04:37.785 [2024-02-14 20:05:14.968109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.785 [2024-02-14 20:05:14.969078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.785 [2024-02-14 20:05:14.969101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.785 Passthru0 00:04:37.785 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.785 20:05:14 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.785 20:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.785 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.785 20:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.785 20:05:14 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.785 { 00:04:37.785 "name": "Malloc2", 00:04:37.785 "aliases": [ 00:04:37.785 "70ebf6b2-eae9-4090-8bd8-c513391c99ca" 00:04:37.785 ], 00:04:37.785 "product_name": "Malloc disk", 00:04:37.785 "block_size": 512, 00:04:37.785 "num_blocks": 16384, 00:04:37.785 "uuid": "70ebf6b2-eae9-4090-8bd8-c513391c99ca", 00:04:37.785 "assigned_rate_limits": { 00:04:37.785 "rw_ios_per_sec": 0, 00:04:37.785 "rw_mbytes_per_sec": 0, 00:04:37.785 "r_mbytes_per_sec": 0, 00:04:37.785 "w_mbytes_per_sec": 0 00:04:37.785 }, 00:04:37.785 "claimed": true, 00:04:37.785 "claim_type": "exclusive_write", 00:04:37.785 "zoned": false, 00:04:37.785 "supported_io_types": { 00:04:37.785 "read": true, 00:04:37.785 "write": true, 00:04:37.785 "unmap": true, 00:04:37.785 "write_zeroes": true, 00:04:37.785 "flush": true, 00:04:37.785 "reset": true, 00:04:37.785 "compare": false, 00:04:37.785 "compare_and_write": false, 00:04:37.785 "abort": true, 00:04:37.785 "nvme_admin": false, 00:04:37.785 "nvme_io": false 00:04:37.785 }, 00:04:37.785 "memory_domains": [ 00:04:37.785 { 00:04:37.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.785 "dma_device_type": 2 00:04:37.785 } 00:04:37.785 ], 00:04:37.785 "driver_specific": {} 00:04:37.785 }, 00:04:37.785 { 00:04:37.785 "name": "Passthru0", 00:04:37.785 "aliases": [ 00:04:37.785 "a199e497-b8ce-5a3c-970c-37e263d29d53" 00:04:37.785 ], 00:04:37.785 "product_name": "passthru", 00:04:37.785 "block_size": 512, 00:04:37.785 "num_blocks": 16384, 00:04:37.785 "uuid": "a199e497-b8ce-5a3c-970c-37e263d29d53", 00:04:37.785 "assigned_rate_limits": { 00:04:37.785 "rw_ios_per_sec": 0, 00:04:37.785 "rw_mbytes_per_sec": 0, 00:04:37.785 "r_mbytes_per_sec": 0, 00:04:37.785 "w_mbytes_per_sec": 0 00:04:37.785 }, 00:04:37.785 "claimed": false, 00:04:37.785 "zoned": false, 00:04:37.785 "supported_io_types": { 00:04:37.785 "read": true, 00:04:37.785 "write": true, 00:04:37.785 "unmap": true, 00:04:37.785 "write_zeroes": true, 00:04:37.785 "flush": true, 00:04:37.785 "reset": true, 00:04:37.785 "compare": false, 00:04:37.785 "compare_and_write": false, 00:04:37.785 "abort": true, 00:04:37.785 "nvme_admin": false, 00:04:37.785 "nvme_io": false 00:04:37.785 }, 00:04:37.785 "memory_domains": [ 00:04:37.785 { 00:04:37.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.785 "dma_device_type": 2 00:04:37.785 } 00:04:37.785 ], 00:04:37.785 "driver_specific": { 00:04:37.785 "passthru": { 00:04:37.785 "name": "Passthru0", 00:04:37.785 "base_bdev_name": "Malloc2" 00:04:37.785 } 00:04:37.785 } 00:04:37.785 } 
00:04:37.785 ]' 00:04:37.785 20:05:14 -- rpc/rpc.sh@21 -- # jq length 00:04:37.785 20:05:15 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.785 20:05:15 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.785 20:05:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.785 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:37.785 20:05:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.785 20:05:15 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:37.785 20:05:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.785 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:37.785 20:05:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.785 20:05:15 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.785 20:05:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.785 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:37.785 20:05:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.785 20:05:15 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.785 20:05:15 -- rpc/rpc.sh@26 -- # jq length 00:04:37.785 20:05:15 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.785 00:04:37.785 real 0m0.278s 00:04:37.785 user 0m0.177s 00:04:37.785 sys 0m0.034s 00:04:37.785 20:05:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.785 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:37.786 ************************************ 00:04:37.786 END TEST rpc_daemon_integrity 00:04:37.786 ************************************ 00:04:37.786 20:05:15 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:37.786 20:05:15 -- rpc/rpc.sh@84 -- # killprocess 1591191 00:04:37.786 20:05:15 -- common/autotest_common.sh@924 -- # '[' -z 1591191 ']' 00:04:37.786 20:05:15 -- common/autotest_common.sh@928 -- # kill -0 1591191 00:04:37.786 20:05:15 -- common/autotest_common.sh@929 -- # uname 00:04:37.786 20:05:15 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:37.786 20:05:15 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1591191 00:04:37.786 20:05:15 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:37.786 20:05:15 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:37.786 20:05:15 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1591191' 00:04:37.786 killing process with pid 1591191 00:04:37.786 20:05:15 -- common/autotest_common.sh@943 -- # kill 1591191 00:04:37.786 20:05:15 -- common/autotest_common.sh@948 -- # wait 1591191 00:04:38.355 00:04:38.355 real 0m2.325s 00:04:38.355 user 0m2.974s 00:04:38.355 sys 0m0.578s 00:04:38.355 20:05:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.355 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.355 ************************************ 00:04:38.355 END TEST rpc 00:04:38.355 ************************************ 00:04:38.355 20:05:15 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:38.355 20:05:15 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:38.355 20:05:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:38.355 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.355 ************************************ 00:04:38.355 START TEST rpc_client 00:04:38.355 ************************************ 00:04:38.355 20:05:15 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
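The rpc_integrity and rpc_daemon_integrity suites that just finished reduce to a create/claim/delete round trip over the target's UNIX RPC socket. A minimal sketch of that flow, using only RPCs that appear in the trace and assuming a running spdk_tgt on the default /var/tmp/spdk.sock:

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  # 8 MB malloc bdev with 512-byte blocks -> 16384 blocks, as listed above
  malloc=$($RPC bdev_malloc_create 8 512)
  # the passthru bdev claims its base, flipping "claimed" to true
  $RPC bdev_passthru_create -b "$malloc" -p Passthru0
  $RPC bdev_get_bdevs | jq length   # expect 2
  # tear down in reverse order; nothing should remain
  $RPC bdev_passthru_delete Passthru0
  $RPC bdev_malloc_delete "$malloc"
  $RPC bdev_get_bdevs | jq length   # expect 0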
00:04:38.355 * Looking for test storage... 00:04:38.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:38.355 20:05:15 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:38.355 OK 00:04:38.355 20:05:15 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:38.355 00:04:38.355 real 0m0.095s 00:04:38.355 user 0m0.039s 00:04:38.355 sys 0m0.062s 00:04:38.355 20:05:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.355 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.355 ************************************ 00:04:38.355 END TEST rpc_client 00:04:38.355 ************************************ 00:04:38.355 20:05:15 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:38.355 20:05:15 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:38.355 20:05:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:38.355 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.355 ************************************ 00:04:38.355 START TEST json_config 00:04:38.355 ************************************ 00:04:38.355 20:05:15 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:38.355 20:05:15 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.355 20:05:15 -- nvmf/common.sh@7 -- # uname -s 00:04:38.355 20:05:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.355 20:05:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.355 20:05:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.355 20:05:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.355 20:05:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.355 20:05:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.355 20:05:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.355 20:05:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.355 20:05:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.355 20:05:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.355 20:05:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:38.355 20:05:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:38.355 20:05:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.355 20:05:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.355 20:05:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.355 20:05:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:38.355 20:05:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.355 20:05:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.355 20:05:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.355 20:05:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.355 20:05:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.355 20:05:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.355 20:05:15 -- paths/export.sh@5 -- # export PATH 00:04:38.355 20:05:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.355 20:05:15 -- nvmf/common.sh@46 -- # : 0 00:04:38.355 20:05:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:38.355 20:05:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:38.355 20:05:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:38.355 20:05:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.355 20:05:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.355 20:05:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:38.355 20:05:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:38.355 20:05:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:38.355 20:05:15 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:38.355 20:05:15 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:38.355 20:05:15 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:38.355 20:05:15 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:38.355 20:05:15 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:38.355 20:05:15 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:38.355 20:05:15 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:38.356 20:05:15 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:38.356 20:05:15 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:38.356 20:05:15 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:38.356 20:05:15 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:38.356 20:05:15 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:38.356 20:05:15 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:38.356 20:05:15 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.356 20:05:15 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:38.356 INFO: JSON configuration test init 00:04:38.356 20:05:15 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:38.356 20:05:15 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:38.356 20:05:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:38.356 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.615 20:05:15 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:38.615 20:05:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:38.615 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.615 20:05:15 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:38.615 20:05:15 -- json_config/json_config.sh@98 -- # local app=target 00:04:38.615 20:05:15 -- json_config/json_config.sh@99 -- # shift 00:04:38.615 20:05:15 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:38.615 20:05:15 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:38.615 20:05:15 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:38.615 20:05:15 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:38.615 20:05:15 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:38.615 20:05:15 -- json_config/json_config.sh@111 -- # app_pid[$app]=1591851 00:04:38.615 20:05:15 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:38.615 Waiting for target to run... 00:04:38.615 20:05:15 -- json_config/json_config.sh@114 -- # waitforlisten 1591851 /var/tmp/spdk_tgt.sock 00:04:38.615 20:05:15 -- common/autotest_common.sh@817 -- # '[' -z 1591851 ']' 00:04:38.615 20:05:15 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:38.615 20:05:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.615 20:05:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:38.615 20:05:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.615 20:05:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:38.615 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.615 [2024-02-14 20:05:15.826870] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
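The json_config test launches its own target with the exact command line traced above, then waits for the RPC socket rather than sleeping. A sketch of that launch-and-wait step; the rpc_get_methods probe stands in for the test's waitforlisten helper and is an assumption, not what the script literally calls:

  # -m 0x1: core 0 only; -s 1024: 1024 MB of memory; --wait-for-rpc
  # holds subsystem init until an explicit RPC starts the framework.
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  pid=$!
  # Poll the private socket until the RPC server answers (assumed probe).
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done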
00:04:38.615 [2024-02-14 20:05:15.826924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591851 ] 00:04:38.615 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.874 [2024-02-14 20:05:16.100126] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.874 [2024-02-14 20:05:16.163924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:38.874 [2024-02-14 20:05:16.164020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.443 20:05:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:39.443 20:05:16 -- common/autotest_common.sh@850 -- # return 0 00:04:39.443 20:05:16 -- json_config/json_config.sh@115 -- # echo '' 00:04:39.443 00:04:39.443 20:05:16 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:39.443 20:05:16 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:39.443 20:05:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:39.443 20:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:39.443 20:05:16 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:39.443 20:05:16 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:39.443 20:05:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:39.443 20:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:39.443 20:05:16 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:39.443 20:05:16 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:39.443 20:05:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:42.734 20:05:19 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:42.734 20:05:19 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:42.734 20:05:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:42.734 20:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.734 20:05:19 -- json_config/json_config.sh@48 -- # local ret=0 00:04:42.734 20:05:19 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.734 20:05:19 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:42.734 20:05:19 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:42.734 20:05:19 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:42.734 20:05:19 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:42.734 20:05:19 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:42.734 20:05:19 -- json_config/json_config.sh@51 -- # local get_types 00:04:42.734 20:05:19 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:42.734 20:05:19 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:42.734 20:05:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:42.734 20:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.734 20:05:19 -- json_config/json_config.sh@58 -- # return 0 00:04:42.734 20:05:19 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:42.734 20:05:19 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:42.734 20:05:19 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:42.734 20:05:19 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:42.734 20:05:19 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:42.734 20:05:19 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:42.734 20:05:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:42.734 20:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.734 20:05:19 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:42.734 20:05:19 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:42.734 20:05:19 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:42.734 20:05:19 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.735 20:05:19 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.735 MallocForNvmf0 00:04:42.735 20:05:20 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.735 20:05:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.994 MallocForNvmf1 00:04:42.994 20:05:20 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.994 20:05:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.994 [2024-02-14 20:05:20.387389] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.994 20:05:20 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.994 20:05:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.253 20:05:20 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.253 20:05:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.512 20:05:20 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.512 20:05:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.512 20:05:20 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:43.512 20:05:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:43.772 [2024-02-14 20:05:21.049524] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
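The listener message above is the last step of create_nvmf_subsystem_config. The same five RPCs, copied from the trace, stand up the NVMe/TCP target by hand against the same socket:

  RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0     # TCP transport, traced unit/capsule sizes
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0   # backing namespace
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420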
00:04:43.772 20:05:21 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:43.772 20:05:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.772 20:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:43.772 20:05:21 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:43.772 20:05:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.772 20:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:43.772 20:05:21 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:43.772 20:05:21 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:43.772 20:05:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.030 MallocBdevForConfigChangeCheck 00:04:44.030 20:05:21 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:44.030 20:05:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:44.030 20:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:44.030 20:05:21 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:44.030 20:05:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.290 20:05:21 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:44.290 INFO: shutting down applications... 00:04:44.290 20:05:21 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:44.290 20:05:21 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:44.290 20:05:21 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:44.290 20:05:21 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:46.195 Calling clear_iscsi_subsystem 00:04:46.195 Calling clear_nvmf_subsystem 00:04:46.195 Calling clear_nbd_subsystem 00:04:46.195 Calling clear_ublk_subsystem 00:04:46.195 Calling clear_vhost_blk_subsystem 00:04:46.195 Calling clear_vhost_scsi_subsystem 00:04:46.195 Calling clear_scheduler_subsystem 00:04:46.195 Calling clear_bdev_subsystem 00:04:46.195 Calling clear_accel_subsystem 00:04:46.195 Calling clear_vmd_subsystem 00:04:46.195 Calling clear_sock_subsystem 00:04:46.195 Calling clear_iobuf_subsystem 00:04:46.195 20:05:23 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:46.195 20:05:23 -- json_config/json_config.sh@396 -- # count=100 00:04:46.195 20:05:23 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:46.195 20:05:23 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.195 20:05:23 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:46.195 20:05:23 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:46.195 20:05:23 -- json_config/json_config.sh@398 -- # break 00:04:46.195 20:05:23 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:46.195 20:05:23 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:04:46.195 20:05:23 -- json_config/json_config.sh@120 -- # local app=target 00:04:46.195 20:05:23 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:46.195 20:05:23 -- json_config/json_config.sh@124 -- # [[ -n 1591851 ]] 00:04:46.195 20:05:23 -- json_config/json_config.sh@127 -- # kill -SIGINT 1591851 00:04:46.195 20:05:23 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:46.195 20:05:23 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:46.195 20:05:23 -- json_config/json_config.sh@130 -- # kill -0 1591851 00:04:46.195 20:05:23 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:46.765 20:05:23 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:46.765 20:05:23 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:46.765 20:05:23 -- json_config/json_config.sh@130 -- # kill -0 1591851 00:04:46.765 20:05:23 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:46.765 20:05:23 -- json_config/json_config.sh@132 -- # break 00:04:46.765 20:05:23 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:46.765 20:05:23 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:46.765 SPDK target shutdown done 00:04:46.765 20:05:23 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:46.765 INFO: relaunching applications... 00:04:46.765 20:05:23 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.765 20:05:23 -- json_config/json_config.sh@98 -- # local app=target 00:04:46.765 20:05:23 -- json_config/json_config.sh@99 -- # shift 00:04:46.765 20:05:23 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:46.765 20:05:23 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:46.765 20:05:23 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:46.766 20:05:23 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:46.766 20:05:23 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:46.766 20:05:23 -- json_config/json_config.sh@111 -- # app_pid[$app]=1593363 00:04:46.766 20:05:23 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:46.766 Waiting for target to run... 00:04:46.766 20:05:23 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.766 20:05:23 -- json_config/json_config.sh@114 -- # waitforlisten 1593363 /var/tmp/spdk_tgt.sock 00:04:46.766 20:05:23 -- common/autotest_common.sh@817 -- # '[' -z 1593363 ']' 00:04:46.766 20:05:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.766 20:05:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:46.766 20:05:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.766 20:05:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:46.766 20:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:46.766 [2024-02-14 20:05:24.005322] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
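json_config_test_shutdown_app, traced just above, stops the old target with SIGINT and polls up to 30 times at half-second intervals instead of waiting blindly. The loop, reconstructed from the trace, with $pid standing for the recorded app pid:

  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break   # process gone: shutdown complete
    sleep 0.5
  done
  echo 'SPDK target shutdown done'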
00:04:46.766 [2024-02-14 20:05:24.005378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593363 ] 00:04:46.766 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.334 [2024-02-14 20:05:24.452446] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.334 [2024-02-14 20:05:24.531745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.334 [2024-02-14 20:05:24.531862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.334 [2024-02-14 20:05:24.531883] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:50.628 [2024-02-14 20:05:27.537441] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.628 [2024-02-14 20:05:27.569703] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:50.886 20:05:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:50.886 20:05:28 -- common/autotest_common.sh@850 -- # return 0 00:04:50.886 20:05:28 -- json_config/json_config.sh@115 -- # echo '' 00:04:50.886 00:04:50.887 20:05:28 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:50.887 20:05:28 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:50.887 INFO: Checking if target configuration is the same... 00:04:50.887 20:05:28 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.887 20:05:28 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:50.887 20:05:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.887 + '[' 2 -ne 2 ']' 00:04:50.887 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:50.887 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:50.887 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:50.887 +++ basename /dev/fd/62 00:04:50.887 ++ mktemp /tmp/62.XXX 00:04:50.887 + tmp_file_1=/tmp/62.O34 00:04:50.887 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.887 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.887 + tmp_file_2=/tmp/spdk_tgt_config.json.9X6 00:04:50.887 + ret=0 00:04:50.887 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.145 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.145 + diff -u /tmp/62.O34 /tmp/spdk_tgt_config.json.9X6 00:04:51.145 + echo 'INFO: JSON config files are the same' 00:04:51.145 INFO: JSON config files are the same 00:04:51.145 + rm /tmp/62.O34 /tmp/spdk_tgt_config.json.9X6 00:04:51.145 + exit 0 00:04:51.145 20:05:28 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:51.145 20:05:28 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
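The '+ exit 0' above is json_diff.sh reporting the first comparison clean: it dumps the live configuration with save_config, normalizes both sides with config_filter.py -method sort, and diffs them. The core of that check, assuming config_filter.py filters stdin to stdout as its bare invocation in the trace suggests:

  RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  FILTER=test/json_config/config_filter.py
  $RPC save_config | $FILTER -method sort > /tmp/live.json
  $FILTER -method sort < spdk_tgt_config.json > /tmp/ref.json
  diff -u /tmp/live.json /tmp/ref.json && echo 'INFO: JSON config files are the same'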
00:04:51.145 INFO: changing configuration and checking if this can be detected... 00:04:51.145 20:05:28 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:51.145 20:05:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:51.403 20:05:28 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.403 20:05:28 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:51.403 20:05:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.403 + '[' 2 -ne 2 ']' 00:04:51.404 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:51.404 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:51.404 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:51.404 +++ basename /dev/fd/62 00:04:51.404 ++ mktemp /tmp/62.XXX 00:04:51.404 + tmp_file_1=/tmp/62.YsK 00:04:51.404 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.404 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:51.404 + tmp_file_2=/tmp/spdk_tgt_config.json.vcX 00:04:51.404 + ret=0 00:04:51.404 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.663 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.663 + diff -u /tmp/62.YsK /tmp/spdk_tgt_config.json.vcX 00:04:51.663 + ret=1 00:04:51.663 + echo '=== Start of file: /tmp/62.YsK ===' 00:04:51.663 + cat /tmp/62.YsK 00:04:51.663 + echo '=== End of file: /tmp/62.YsK ===' 00:04:51.663 + echo '' 00:04:51.663 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vcX ===' 00:04:51.663 + cat /tmp/spdk_tgt_config.json.vcX 00:04:51.663 + echo '=== End of file: /tmp/spdk_tgt_config.json.vcX ===' 00:04:51.663 + echo '' 00:04:51.663 + rm /tmp/62.YsK /tmp/spdk_tgt_config.json.vcX 00:04:51.663 + exit 1 00:04:51.663 20:05:28 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:51.663 INFO: configuration change detected. 
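The two comparisons above are json_config's change detection in miniature: save_config pulls the live configuration over the target's Unix-socket RPC, config_filter.py -method sort normalizes key order in both documents so ordering cannot mask (or fake) a difference, and a plain diff -u decides the verdict: ret=0 when identical, ret=1 once MallocBdevForConfigChangeCheck has been deleted. A minimal sketch of that logic, assuming the spdk root as working directory and that config_filter.py filters stdin to stdout as the trace suggests:

    # Sketch of the change detection above (variable names illustrative).
    sock=/var/tmp/spdk_tgt.sock
    saved=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
    sort_cfg() { test/json_config/config_filter.py -method sort; }   # normalize key order
    if diff -u <(sort_cfg < "$saved") <(scripts/rpc.py -s "$sock" save_config | sort_cfg); then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'   # the ret=1 path traced above
    fi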
00:04:51.663 20:05:28 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:51.663 20:05:28 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:51.663 20:05:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:51.663 20:05:28 -- common/autotest_common.sh@10 -- # set +x 00:04:51.663 20:05:28 -- json_config/json_config.sh@360 -- # local ret=0 00:04:51.663 20:05:28 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:51.663 20:05:28 -- json_config/json_config.sh@370 -- # [[ -n 1593363 ]] 00:04:51.663 20:05:28 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:51.663 20:05:28 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:51.663 20:05:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:51.663 20:05:28 -- common/autotest_common.sh@10 -- # set +x 00:04:51.663 20:05:28 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:51.663 20:05:28 -- json_config/json_config.sh@246 -- # uname -s 00:04:51.663 20:05:28 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:51.663 20:05:28 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:51.663 20:05:28 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:51.663 20:05:28 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:51.663 20:05:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:51.663 20:05:28 -- common/autotest_common.sh@10 -- # set +x 00:04:51.663 20:05:28 -- json_config/json_config.sh@376 -- # killprocess 1593363 00:04:51.663 20:05:28 -- common/autotest_common.sh@924 -- # '[' -z 1593363 ']' 00:04:51.663 20:05:28 -- common/autotest_common.sh@928 -- # kill -0 1593363 00:04:51.663 20:05:28 -- common/autotest_common.sh@929 -- # uname 00:04:51.663 20:05:28 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:51.663 20:05:28 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1593363 00:04:51.663 20:05:29 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:51.663 20:05:29 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:51.663 20:05:29 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1593363' 00:04:51.663 killing process with pid 1593363 00:04:51.663 20:05:29 -- common/autotest_common.sh@943 -- # kill 1593363 00:04:51.663 [2024-02-14 20:05:29.034965] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:51.663 20:05:29 -- common/autotest_common.sh@948 -- # wait 1593363 00:04:53.621 20:05:30 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.621 20:05:30 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:53.621 20:05:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:53.621 20:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.621 20:05:30 -- json_config/json_config.sh@381 -- # return 0 00:04:53.621 20:05:30 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:53.621 INFO: Success 00:04:53.621 00:04:53.621 real 0m14.914s 00:04:53.621 user 0m15.954s 00:04:53.621 sys 0m1.933s 00:04:53.621 20:05:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.621 20:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.621 
************************************ 00:04:53.621 END TEST json_config 00:04:53.621 ************************************ 00:04:53.621 20:05:30 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:53.621 20:05:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:53.621 20:05:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:53.621 20:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.621 ************************************ 00:04:53.621 START TEST json_config_extra_key 00:04:53.621 ************************************ 00:04:53.621 20:05:30 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.621 20:05:30 -- nvmf/common.sh@7 -- # uname -s 00:04:53.621 20:05:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.621 20:05:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.621 20:05:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.621 20:05:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.621 20:05:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.621 20:05:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.621 20:05:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.621 20:05:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.621 20:05:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.621 20:05:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.621 20:05:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:53.621 20:05:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:53.621 20:05:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.621 20:05:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.621 20:05:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.621 20:05:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.621 20:05:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.621 20:05:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.621 20:05:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.621 20:05:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.621 20:05:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.621 20:05:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.621 20:05:30 -- paths/export.sh@5 -- # export PATH 00:04:53.621 20:05:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.621 20:05:30 -- nvmf/common.sh@46 -- # : 0 00:04:53.621 20:05:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:53.621 20:05:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:53.621 20:05:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:53.621 20:05:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.621 20:05:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.621 20:05:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:53.621 20:05:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:53.621 20:05:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:53.621 INFO: launching applications... 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1594635 00:04:53.621 20:05:30 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:53.622 Waiting for target to run... 
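The waitforlisten call traced next is autotest_common.sh's readiness gate: it polls until the new target answers on its RPC socket, bounded by the max_retries=100 set above. A rough sketch of that loop; the probe command is an assumption for illustration, not a quote of the helper:

    # Rough sketch of waitforlisten (probe assumed; bound from max_retries above).
    rpc_addr=/var/tmp/spdk_tgt.sock
    for (( i = 0; i < 100; i++ )); do
        if [[ -S $rpc_addr ]] && scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break                                    # target is up and serving RPCs
        fi
        sleep 0.1
    done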
00:04:53.622 20:05:30 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:53.622 20:05:30 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1594635 /var/tmp/spdk_tgt.sock 00:04:53.622 20:05:30 -- common/autotest_common.sh@817 -- # '[' -z 1594635 ']' 00:04:53.622 20:05:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.622 20:05:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:53.622 20:05:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.622 20:05:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:53.622 20:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.622 [2024-02-14 20:05:30.732741] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:53.622 [2024-02-14 20:05:30.732793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594635 ] 00:04:53.622 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.622 [2024-02-14 20:05:30.995059] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.904 [2024-02-14 20:05:31.062079] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:53.904 [2024-02-14 20:05:31.062171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.904 [2024-02-14 20:05:31.062190] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:54.164 20:05:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:54.164 20:05:31 -- common/autotest_common.sh@850 -- # return 0 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:54.164 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:54.164 INFO: shutting down applications... 
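The shutdown that follows is the same graceful-stop pattern used for json_config earlier: send SIGINT, then poll the pid with kill -0 for up to thirty half-second intervals before giving up. Condensed from the json_config_extra_key.sh trace below:

    # The stop loop traced below, condensed (pid taken from this run).
    app_pid=1594635
    kill -SIGINT "$app_pid"                        # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2> /dev/null || break   # kill -0 only tests liveness
        sleep 0.5                                  # bounded wait, at most ~15 s
    done
    echo 'SPDK target shutdown done'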
00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1594635 ]] 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1594635 00:04:54.164 [2024-02-14 20:05:31.504706] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1594635 00:04:54.164 20:05:31 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:54.733 20:05:32 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:54.733 20:05:32 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:54.733 20:05:32 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1594635 00:04:54.733 20:05:32 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:54.733 20:05:32 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:54.733 20:05:32 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:54.733 20:05:32 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:54.733 SPDK target shutdown done 00:04:54.733 20:05:32 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:54.733 Success 00:04:54.733 00:04:54.733 real 0m1.377s 00:04:54.733 user 0m1.195s 00:04:54.733 sys 0m0.324s 00:04:54.733 20:05:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.733 20:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.733 ************************************ 00:04:54.733 END TEST json_config_extra_key 00:04:54.733 ************************************ 00:04:54.733 20:05:32 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.733 20:05:32 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:54.733 20:05:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:54.733 20:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.733 ************************************ 00:04:54.733 START TEST alias_rpc 00:04:54.733 ************************************ 00:04:54.733 20:05:32 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.733 * Looking for test storage... 
00:04:54.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:54.733 20:05:32 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:54.733 20:05:32 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1594913 00:04:54.733 20:05:32 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1594913 00:04:54.733 20:05:32 -- common/autotest_common.sh@817 -- # '[' -z 1594913 ']' 00:04:54.733 20:05:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.733 20:05:32 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.733 20:05:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.733 20:05:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.733 20:05:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.733 20:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.992 [2024-02-14 20:05:32.161182] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:54.993 [2024-02-14 20:05:32.161236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594913 ] 00:04:54.993 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.993 [2024-02-14 20:05:32.219591] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.993 [2024-02-14 20:05:32.295478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.993 [2024-02-14 20:05:32.295592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.562 20:05:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:55.562 20:05:32 -- common/autotest_common.sh@850 -- # return 0 00:04:55.562 20:05:32 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:55.822 20:05:33 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1594913 00:04:55.822 20:05:33 -- common/autotest_common.sh@924 -- # '[' -z 1594913 ']' 00:04:55.822 20:05:33 -- common/autotest_common.sh@928 -- # kill -0 1594913 00:04:55.822 20:05:33 -- common/autotest_common.sh@929 -- # uname 00:04:55.822 20:05:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:55.822 20:05:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1594913 00:04:55.822 20:05:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:55.822 20:05:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:55.822 20:05:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1594913' 00:04:55.822 killing process with pid 1594913 00:04:55.822 20:05:33 -- common/autotest_common.sh@943 -- # kill 1594913 00:04:55.822 20:05:33 -- common/autotest_common.sh@948 -- # wait 1594913 00:04:56.391 00:04:56.391 real 0m1.468s 00:04:56.391 user 0m1.601s 00:04:56.391 sys 0m0.374s 00:04:56.391 20:05:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.391 20:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.391 ************************************ 00:04:56.391 END TEST alias_rpc 00:04:56.391 ************************************ 00:04:56.391 20:05:33 -- 
spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:56.391 20:05:33 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:56.391 20:05:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:56.391 20:05:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:56.391 20:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.391 ************************************ 00:04:56.391 START TEST spdkcli_tcp 00:04:56.391 ************************************ 00:04:56.391 20:05:33 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:56.391 * Looking for test storage... 00:04:56.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:56.391 20:05:33 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:56.391 20:05:33 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:56.391 20:05:33 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:56.391 20:05:33 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.391 20:05:33 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.391 20:05:33 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.391 20:05:33 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.391 20:05:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:56.391 20:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.391 20:05:33 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1595204 00:04:56.391 20:05:33 -- spdkcli/tcp.sh@27 -- # waitforlisten 1595204 00:04:56.391 20:05:33 -- common/autotest_common.sh@817 -- # '[' -z 1595204 ']' 00:04:56.391 20:05:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.391 20:05:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:56.391 20:05:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.391 20:05:33 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.391 20:05:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:56.391 20:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.391 [2024-02-14 20:05:33.665186] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:56.391 [2024-02-14 20:05:33.665239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595204 ] 00:04:56.391 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.391 [2024-02-14 20:05:33.724096] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.391 [2024-02-14 20:05:33.800865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.391 [2024-02-14 20:05:33.801021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.391 [2024-02-14 20:05:33.801023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.329 20:05:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:57.329 20:05:34 -- common/autotest_common.sh@850 -- # return 0 00:04:57.329 20:05:34 -- spdkcli/tcp.sh@31 -- # socat_pid=1595428 00:04:57.329 20:05:34 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:57.329 20:05:34 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:57.329 [ 00:04:57.329 "bdev_malloc_delete", 00:04:57.329 "bdev_malloc_create", 00:04:57.329 "bdev_null_resize", 00:04:57.329 "bdev_null_delete", 00:04:57.329 "bdev_null_create", 00:04:57.329 "bdev_nvme_cuse_unregister", 00:04:57.329 "bdev_nvme_cuse_register", 00:04:57.329 "bdev_opal_new_user", 00:04:57.329 "bdev_opal_set_lock_state", 00:04:57.329 "bdev_opal_delete", 00:04:57.329 "bdev_opal_get_info", 00:04:57.329 "bdev_opal_create", 00:04:57.329 "bdev_nvme_opal_revert", 00:04:57.329 "bdev_nvme_opal_init", 00:04:57.329 "bdev_nvme_send_cmd", 00:04:57.329 "bdev_nvme_get_path_iostat", 00:04:57.329 "bdev_nvme_get_mdns_discovery_info", 00:04:57.329 "bdev_nvme_stop_mdns_discovery", 00:04:57.329 "bdev_nvme_start_mdns_discovery", 00:04:57.329 "bdev_nvme_set_multipath_policy", 00:04:57.329 "bdev_nvme_set_preferred_path", 00:04:57.329 "bdev_nvme_get_io_paths", 00:04:57.329 "bdev_nvme_remove_error_injection", 00:04:57.329 "bdev_nvme_add_error_injection", 00:04:57.329 "bdev_nvme_get_discovery_info", 00:04:57.329 "bdev_nvme_stop_discovery", 00:04:57.329 "bdev_nvme_start_discovery", 00:04:57.329 "bdev_nvme_get_controller_health_info", 00:04:57.329 "bdev_nvme_disable_controller", 00:04:57.329 "bdev_nvme_enable_controller", 00:04:57.329 "bdev_nvme_reset_controller", 00:04:57.329 "bdev_nvme_get_transport_statistics", 00:04:57.329 "bdev_nvme_apply_firmware", 00:04:57.329 "bdev_nvme_detach_controller", 00:04:57.329 "bdev_nvme_get_controllers", 00:04:57.329 "bdev_nvme_attach_controller", 00:04:57.329 "bdev_nvme_set_hotplug", 00:04:57.329 "bdev_nvme_set_options", 00:04:57.329 "bdev_passthru_delete", 00:04:57.329 "bdev_passthru_create", 00:04:57.329 "bdev_lvol_grow_lvstore", 00:04:57.329 "bdev_lvol_get_lvols", 00:04:57.329 "bdev_lvol_get_lvstores", 00:04:57.329 "bdev_lvol_delete", 00:04:57.329 "bdev_lvol_set_read_only", 00:04:57.329 "bdev_lvol_resize", 00:04:57.329 "bdev_lvol_decouple_parent", 00:04:57.329 "bdev_lvol_inflate", 00:04:57.329 "bdev_lvol_rename", 00:04:57.329 "bdev_lvol_clone_bdev", 00:04:57.329 "bdev_lvol_clone", 00:04:57.329 "bdev_lvol_snapshot", 00:04:57.329 "bdev_lvol_create", 00:04:57.329 "bdev_lvol_delete_lvstore", 00:04:57.329 "bdev_lvol_rename_lvstore", 00:04:57.329 "bdev_lvol_create_lvstore", 00:04:57.329 "bdev_raid_set_options", 00:04:57.329 
"bdev_raid_remove_base_bdev", 00:04:57.329 "bdev_raid_add_base_bdev", 00:04:57.329 "bdev_raid_delete", 00:04:57.329 "bdev_raid_create", 00:04:57.329 "bdev_raid_get_bdevs", 00:04:57.329 "bdev_error_inject_error", 00:04:57.329 "bdev_error_delete", 00:04:57.329 "bdev_error_create", 00:04:57.329 "bdev_split_delete", 00:04:57.329 "bdev_split_create", 00:04:57.329 "bdev_delay_delete", 00:04:57.329 "bdev_delay_create", 00:04:57.329 "bdev_delay_update_latency", 00:04:57.329 "bdev_zone_block_delete", 00:04:57.329 "bdev_zone_block_create", 00:04:57.329 "blobfs_create", 00:04:57.329 "blobfs_detect", 00:04:57.329 "blobfs_set_cache_size", 00:04:57.329 "bdev_aio_delete", 00:04:57.329 "bdev_aio_rescan", 00:04:57.329 "bdev_aio_create", 00:04:57.329 "bdev_ftl_set_property", 00:04:57.329 "bdev_ftl_get_properties", 00:04:57.329 "bdev_ftl_get_stats", 00:04:57.329 "bdev_ftl_unmap", 00:04:57.329 "bdev_ftl_unload", 00:04:57.329 "bdev_ftl_delete", 00:04:57.329 "bdev_ftl_load", 00:04:57.329 "bdev_ftl_create", 00:04:57.329 "bdev_virtio_attach_controller", 00:04:57.329 "bdev_virtio_scsi_get_devices", 00:04:57.329 "bdev_virtio_detach_controller", 00:04:57.329 "bdev_virtio_blk_set_hotplug", 00:04:57.329 "bdev_iscsi_delete", 00:04:57.329 "bdev_iscsi_create", 00:04:57.329 "bdev_iscsi_set_options", 00:04:57.329 "accel_error_inject_error", 00:04:57.329 "ioat_scan_accel_module", 00:04:57.329 "dsa_scan_accel_module", 00:04:57.329 "iaa_scan_accel_module", 00:04:57.329 "iscsi_set_options", 00:04:57.329 "iscsi_get_auth_groups", 00:04:57.329 "iscsi_auth_group_remove_secret", 00:04:57.329 "iscsi_auth_group_add_secret", 00:04:57.329 "iscsi_delete_auth_group", 00:04:57.329 "iscsi_create_auth_group", 00:04:57.329 "iscsi_set_discovery_auth", 00:04:57.329 "iscsi_get_options", 00:04:57.329 "iscsi_target_node_request_logout", 00:04:57.329 "iscsi_target_node_set_redirect", 00:04:57.329 "iscsi_target_node_set_auth", 00:04:57.329 "iscsi_target_node_add_lun", 00:04:57.329 "iscsi_get_connections", 00:04:57.329 "iscsi_portal_group_set_auth", 00:04:57.329 "iscsi_start_portal_group", 00:04:57.329 "iscsi_delete_portal_group", 00:04:57.329 "iscsi_create_portal_group", 00:04:57.329 "iscsi_get_portal_groups", 00:04:57.329 "iscsi_delete_target_node", 00:04:57.329 "iscsi_target_node_remove_pg_ig_maps", 00:04:57.329 "iscsi_target_node_add_pg_ig_maps", 00:04:57.329 "iscsi_create_target_node", 00:04:57.329 "iscsi_get_target_nodes", 00:04:57.329 "iscsi_delete_initiator_group", 00:04:57.329 "iscsi_initiator_group_remove_initiators", 00:04:57.329 "iscsi_initiator_group_add_initiators", 00:04:57.329 "iscsi_create_initiator_group", 00:04:57.330 "iscsi_get_initiator_groups", 00:04:57.330 "nvmf_set_crdt", 00:04:57.330 "nvmf_set_config", 00:04:57.330 "nvmf_set_max_subsystems", 00:04:57.330 "nvmf_subsystem_get_listeners", 00:04:57.330 "nvmf_subsystem_get_qpairs", 00:04:57.330 "nvmf_subsystem_get_controllers", 00:04:57.330 "nvmf_get_stats", 00:04:57.330 "nvmf_get_transports", 00:04:57.330 "nvmf_create_transport", 00:04:57.330 "nvmf_get_targets", 00:04:57.330 "nvmf_delete_target", 00:04:57.330 "nvmf_create_target", 00:04:57.330 "nvmf_subsystem_allow_any_host", 00:04:57.330 "nvmf_subsystem_remove_host", 00:04:57.330 "nvmf_subsystem_add_host", 00:04:57.330 "nvmf_subsystem_remove_ns", 00:04:57.330 "nvmf_subsystem_add_ns", 00:04:57.330 "nvmf_subsystem_listener_set_ana_state", 00:04:57.330 "nvmf_discovery_get_referrals", 00:04:57.330 "nvmf_discovery_remove_referral", 00:04:57.330 "nvmf_discovery_add_referral", 00:04:57.330 "nvmf_subsystem_remove_listener", 
00:04:57.330 "nvmf_subsystem_add_listener", 00:04:57.330 "nvmf_delete_subsystem", 00:04:57.330 "nvmf_create_subsystem", 00:04:57.330 "nvmf_get_subsystems", 00:04:57.330 "env_dpdk_get_mem_stats", 00:04:57.330 "nbd_get_disks", 00:04:57.330 "nbd_stop_disk", 00:04:57.330 "nbd_start_disk", 00:04:57.330 "ublk_recover_disk", 00:04:57.330 "ublk_get_disks", 00:04:57.330 "ublk_stop_disk", 00:04:57.330 "ublk_start_disk", 00:04:57.330 "ublk_destroy_target", 00:04:57.330 "ublk_create_target", 00:04:57.330 "virtio_blk_create_transport", 00:04:57.330 "virtio_blk_get_transports", 00:04:57.330 "vhost_controller_set_coalescing", 00:04:57.330 "vhost_get_controllers", 00:04:57.330 "vhost_delete_controller", 00:04:57.330 "vhost_create_blk_controller", 00:04:57.330 "vhost_scsi_controller_remove_target", 00:04:57.330 "vhost_scsi_controller_add_target", 00:04:57.330 "vhost_start_scsi_controller", 00:04:57.330 "vhost_create_scsi_controller", 00:04:57.330 "thread_set_cpumask", 00:04:57.330 "framework_get_scheduler", 00:04:57.330 "framework_set_scheduler", 00:04:57.330 "framework_get_reactors", 00:04:57.330 "thread_get_io_channels", 00:04:57.330 "thread_get_pollers", 00:04:57.330 "thread_get_stats", 00:04:57.330 "framework_monitor_context_switch", 00:04:57.330 "spdk_kill_instance", 00:04:57.330 "log_enable_timestamps", 00:04:57.330 "log_get_flags", 00:04:57.330 "log_clear_flag", 00:04:57.330 "log_set_flag", 00:04:57.330 "log_get_level", 00:04:57.330 "log_set_level", 00:04:57.330 "log_get_print_level", 00:04:57.330 "log_set_print_level", 00:04:57.330 "framework_enable_cpumask_locks", 00:04:57.330 "framework_disable_cpumask_locks", 00:04:57.330 "framework_wait_init", 00:04:57.330 "framework_start_init", 00:04:57.330 "scsi_get_devices", 00:04:57.330 "bdev_get_histogram", 00:04:57.330 "bdev_enable_histogram", 00:04:57.330 "bdev_set_qos_limit", 00:04:57.330 "bdev_set_qd_sampling_period", 00:04:57.330 "bdev_get_bdevs", 00:04:57.330 "bdev_reset_iostat", 00:04:57.330 "bdev_get_iostat", 00:04:57.330 "bdev_examine", 00:04:57.330 "bdev_wait_for_examine", 00:04:57.330 "bdev_set_options", 00:04:57.330 "notify_get_notifications", 00:04:57.330 "notify_get_types", 00:04:57.330 "accel_get_stats", 00:04:57.330 "accel_set_options", 00:04:57.330 "accel_set_driver", 00:04:57.330 "accel_crypto_key_destroy", 00:04:57.330 "accel_crypto_keys_get", 00:04:57.330 "accel_crypto_key_create", 00:04:57.330 "accel_assign_opc", 00:04:57.330 "accel_get_module_info", 00:04:57.330 "accel_get_opc_assignments", 00:04:57.330 "vmd_rescan", 00:04:57.330 "vmd_remove_device", 00:04:57.330 "vmd_enable", 00:04:57.330 "sock_set_default_impl", 00:04:57.330 "sock_impl_set_options", 00:04:57.330 "sock_impl_get_options", 00:04:57.330 "iobuf_get_stats", 00:04:57.330 "iobuf_set_options", 00:04:57.330 "framework_get_pci_devices", 00:04:57.330 "framework_get_config", 00:04:57.330 "framework_get_subsystems", 00:04:57.330 "trace_get_info", 00:04:57.330 "trace_get_tpoint_group_mask", 00:04:57.330 "trace_disable_tpoint_group", 00:04:57.330 "trace_enable_tpoint_group", 00:04:57.330 "trace_clear_tpoint_mask", 00:04:57.330 "trace_set_tpoint_mask", 00:04:57.330 "spdk_get_version", 00:04:57.330 "rpc_get_methods" 00:04:57.330 ] 00:04:57.330 20:05:34 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:57.330 20:05:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:57.330 20:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.330 20:05:34 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:57.330 20:05:34 -- spdkcli/tcp.sh@38 -- # killprocess 
1595204 00:04:57.330 20:05:34 -- common/autotest_common.sh@924 -- # '[' -z 1595204 ']' 00:04:57.330 20:05:34 -- common/autotest_common.sh@928 -- # kill -0 1595204 00:04:57.330 20:05:34 -- common/autotest_common.sh@929 -- # uname 00:04:57.330 20:05:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:57.330 20:05:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1595204 00:04:57.330 20:05:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:57.330 20:05:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:57.330 20:05:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1595204' 00:04:57.330 killing process with pid 1595204 00:04:57.330 20:05:34 -- common/autotest_common.sh@943 -- # kill 1595204 00:04:57.330 20:05:34 -- common/autotest_common.sh@948 -- # wait 1595204 00:04:57.899 00:04:57.899 real 0m1.475s 00:04:57.899 user 0m2.746s 00:04:57.899 sys 0m0.403s 00:04:57.899 20:05:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.899 20:05:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.899 ************************************ 00:04:57.899 END TEST spdkcli_tcp 00:04:57.899 ************************************ 00:04:57.899 20:05:35 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.899 20:05:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:57.899 20:05:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:57.899 20:05:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.899 ************************************ 00:04:57.899 START TEST dpdk_mem_utility 00:04:57.899 ************************************ 00:04:57.899 20:05:35 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.899 * Looking for test storage... 00:04:57.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:57.899 20:05:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:57.899 20:05:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1595507 00:04:57.899 20:05:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.899 20:05:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1595507 00:04:57.899 20:05:35 -- common/autotest_common.sh@817 -- # '[' -z 1595507 ']' 00:04:57.899 20:05:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.899 20:05:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:57.899 20:05:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.899 20:05:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:57.899 20:05:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.899 [2024-02-14 20:05:35.192296] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:57.899 [2024-02-14 20:05:35.192349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595507 ] 00:04:57.899 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.899 [2024-02-14 20:05:35.253282] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.159 [2024-02-14 20:05:35.327837] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:58.159 [2024-02-14 20:05:35.327956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.727 20:05:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:58.727 20:05:35 -- common/autotest_common.sh@850 -- # return 0 00:04:58.727 20:05:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:58.727 20:05:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:58.727 20:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.727 20:05:35 -- common/autotest_common.sh@10 -- # set +x 00:04:58.727 { 00:04:58.727 "filename": "/tmp/spdk_mem_dump.txt" 00:04:58.727 } 00:04:58.727 20:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.727 20:05:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:58.727 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:58.727 1 heaps totaling size 814.000000 MiB 00:04:58.727 size: 814.000000 MiB heap id: 0 00:04:58.727 end heaps---------- 00:04:58.727 8 mempools totaling size 598.116089 MiB 00:04:58.727 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:58.727 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:58.727 size: 84.521057 MiB name: bdev_io_1595507 00:04:58.727 size: 51.011292 MiB name: evtpool_1595507 00:04:58.727 size: 50.003479 MiB name: msgpool_1595507 00:04:58.727 size: 21.763794 MiB name: PDU_Pool 00:04:58.727 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:58.727 size: 0.026123 MiB name: Session_Pool 00:04:58.727 end mempools------- 00:04:58.727 6 memzones totaling size 4.142822 MiB 00:04:58.727 size: 1.000366 MiB name: RG_ring_0_1595507 00:04:58.727 size: 1.000366 MiB name: RG_ring_1_1595507 00:04:58.728 size: 1.000366 MiB name: RG_ring_4_1595507 00:04:58.728 size: 1.000366 MiB name: RG_ring_5_1595507 00:04:58.728 size: 0.125366 MiB name: RG_ring_2_1595507 00:04:58.728 size: 0.015991 MiB name: RG_ring_3_1595507 00:04:58.728 end memzones------- 00:04:58.728 20:05:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:58.728 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:58.728 list of free elements. 
size: 12.519348 MiB 00:04:58.728 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:58.728 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:58.728 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:58.728 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:58.728 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:58.728 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:58.728 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:58.728 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:58.728 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:58.728 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:58.728 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:58.728 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:58.728 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:58.728 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:58.728 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:58.728 list of standard malloc elements. size: 199.218079 MiB 00:04:58.728 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:58.728 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:58.728 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:58.728 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:58.728 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:58.728 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:58.728 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:58.728 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:58.728 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:58.728 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:58.728 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:58.728 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:58.728 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:58.728 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:58.728 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:58.728 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:58.728 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:58.728 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:58.728 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:58.728 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:58.728 list of memzone associated elements. size: 602.262573 MiB 00:04:58.728 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:58.728 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:58.728 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:58.728 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:58.728 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:58.728 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1595507_0 00:04:58.728 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:58.728 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1595507_0 00:04:58.728 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:58.728 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1595507_0 00:04:58.728 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:58.728 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:58.728 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:58.728 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:58.728 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:58.728 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1595507 00:04:58.728 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:58.728 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1595507 00:04:58.728 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:58.728 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1595507 00:04:58.728 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:58.728 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:58.728 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:58.728 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:58.728 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:58.728 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:58.728 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:58.728 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:58.728 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:58.728 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1595507 00:04:58.728 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:58.728 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1595507 00:04:58.728 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:58.728 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1595507 00:04:58.728 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:58.728 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1595507 00:04:58.728 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:58.728 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1595507 00:04:58.728 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:58.728 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:58.728 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:58.728 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:58.728 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:58.728 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:58.728 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:58.728 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1595507 00:04:58.728 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:58.728 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:58.728 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:58.728 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:58.728 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:58.728 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1595507 00:04:58.728 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:58.728 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:58.728 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:58.728 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1595507 00:04:58.728 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:58.728 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1595507 00:04:58.728 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:58.728 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:58.728 20:05:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:58.728 20:05:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1595507 00:04:58.728 20:05:36 -- common/autotest_common.sh@924 -- # '[' -z 1595507 ']' 00:04:58.728 20:05:36 -- common/autotest_common.sh@928 -- # kill -0 1595507 00:04:58.728 20:05:36 -- common/autotest_common.sh@929 -- # uname 00:04:58.728 20:05:36 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:58.728 20:05:36 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1595507 00:04:58.728 20:05:36 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:58.728 20:05:36 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:58.728 20:05:36 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1595507' 00:04:58.728 killing process with pid 1595507 00:04:58.728 20:05:36 -- common/autotest_common.sh@943 -- # kill 1595507 00:04:58.728 20:05:36 -- common/autotest_common.sh@948 -- # wait 1595507 00:04:59.298 00:04:59.298 real 0m1.388s 00:04:59.298 user 0m1.461s 00:04:59.298 sys 0m0.370s 00:04:59.298 20:05:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.298 20:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.298 ************************************ 00:04:59.298 END TEST dpdk_mem_utility 00:04:59.298 ************************************ 00:04:59.298 20:05:36 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:59.298 20:05:36 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:59.298 20:05:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:59.298 20:05:36 -- common/autotest_common.sh@10 -- # set +x 
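For reference, the memory report traced above is a two-step flow: the env_dpdk_get_mem_stats RPC makes the target write its raw statistics to /tmp/spdk_mem_dump.txt (the filename returned in the JSON reply earlier), and dpdk_mem_info.py renders them, first as the heap/mempool/memzone summary, then in per-element detail. The -m 0 flag is read here as selecting heap 0, inferred from the trace rather than from the script's help text:

    # Two-step flow behind the memory report above (run from the spdk root).
    scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                  # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0             # per-element detail, as dumped above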
00:04:59.298 ************************************ 00:04:59.298 START TEST event 00:04:59.298 ************************************ 00:04:59.298 20:05:36 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:59.298 * Looking for test storage... 00:04:59.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:59.298 20:05:36 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:59.298 20:05:36 -- bdev/nbd_common.sh@6 -- # set -e 00:04:59.298 20:05:36 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.298 20:05:36 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:04:59.298 20:05:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:59.298 20:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.298 ************************************ 00:04:59.298 START TEST event_perf 00:04:59.298 ************************************ 00:04:59.298 20:05:36 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.298 Running I/O for 1 seconds...[2024-02-14 20:05:36.599284] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:59.298 [2024-02-14 20:05:36.599365] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595827 ] 00:04:59.298 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.298 [2024-02-14 20:05:36.664224] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.557 [2024-02-14 20:05:36.738480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.557 [2024-02-14 20:05:36.738580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.557 [2024-02-14 20:05:36.738672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.557 [2024-02-14 20:05:36.738675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.495 Running I/O for 1 seconds... 00:05:00.495 lcore 0: 204240 00:05:00.495 lcore 1: 204241 00:05:00.495 lcore 2: 204241 00:05:00.495 lcore 3: 204241 00:05:00.495 done. 
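event_perf ran one reactor per core in the 0xF mask for the one second requested by -t 1, and each lcore above reports the events it handled in that window, roughly 204k apiece. Summing the columns is a quick plausibility check (a throwaway one-liner, not part of the test):

    # Throwaway check: total throughput implied by the lcore lines above.
    printf '%s\n' 'lcore 0: 204240' 'lcore 1: 204241' 'lcore 2: 204241' 'lcore 3: 204241' |
        awk '{ total += $3 } END { printf "%d events/sec across %d reactors\n", total, NR }'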
00:05:00.495 00:05:00.495 real 0m1.248s 00:05:00.495 user 0m4.160s 00:05:00.495 sys 0m0.083s 00:05:00.495 20:05:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.495 20:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:00.495 ************************************ 00:05:00.495 END TEST event_perf 00:05:00.495 ************************************ 00:05:00.495 20:05:37 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.495 20:05:37 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:05:00.495 20:05:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:00.495 20:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:00.495 ************************************ 00:05:00.495 START TEST event_reactor 00:05:00.495 ************************************ 00:05:00.495 20:05:37 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.495 [2024-02-14 20:05:37.875834] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:00.495 [2024-02-14 20:05:37.875900] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596046 ] 00:05:00.495 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.755 [2024-02-14 20:05:37.936179] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.755 [2024-02-14 20:05:38.003507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.693 test_start 00:05:01.693 oneshot 00:05:01.693 tick 100 00:05:01.693 tick 100 00:05:01.693 tick 250 00:05:01.693 tick 100 00:05:01.693 tick 100 00:05:01.693 tick 100 00:05:01.693 tick 250 00:05:01.693 tick 500 00:05:01.693 tick 100 00:05:01.693 tick 100 00:05:01.693 tick 250 00:05:01.693 tick 100 00:05:01.693 tick 100 00:05:01.693 test_end 00:05:01.693 00:05:01.693 real 0m1.222s 00:05:01.693 user 0m1.153s 00:05:01.693 sys 0m0.065s 00:05:01.693 20:05:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.693 20:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:01.693 ************************************ 00:05:01.693 END TEST event_reactor 00:05:01.693 ************************************ 00:05:01.953 20:05:39 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.953 20:05:39 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:05:01.953 20:05:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:01.953 20:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:01.953 ************************************ 00:05:01.953 START TEST event_reactor_perf 00:05:01.953 ************************************ 00:05:01.953 20:05:39 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.953 [2024-02-14 20:05:39.139379] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:01.953 [2024-02-14 20:05:39.139447] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596285 ] 00:05:01.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.953 [2024-02-14 20:05:39.201618] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.953 [2024-02-14 20:05:39.268061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.331 test_start 00:05:03.331 test_end 00:05:03.331 Performance: 503877 events per second 00:05:03.331 00:05:03.331 real 0m1.236s 00:05:03.331 user 0m1.159s 00:05:03.331 sys 0m0.072s 00:05:03.331 20:05:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.331 20:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:03.331 ************************************ 00:05:03.331 END TEST event_reactor_perf 00:05:03.331 ************************************ 00:05:03.331 20:05:40 -- event/event.sh@49 -- # uname -s 00:05:03.331 20:05:40 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:03.331 20:05:40 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:03.331 20:05:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:03.331 20:05:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:03.331 20:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:03.331 ************************************ 00:05:03.331 START TEST event_scheduler 00:05:03.331 ************************************ 00:05:03.331 20:05:40 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:03.331 * Looking for test storage... 00:05:03.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:03.331 20:05:40 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:03.331 20:05:40 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1596557 00:05:03.331 20:05:40 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.331 20:05:40 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:03.331 20:05:40 -- scheduler/scheduler.sh@37 -- # waitforlisten 1596557 00:05:03.331 20:05:40 -- common/autotest_common.sh@817 -- # '[' -z 1596557 ']' 00:05:03.331 20:05:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.331 20:05:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:03.331 20:05:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.331 20:05:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:03.331 20:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:03.331 [2024-02-14 20:05:40.531738] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:03.331 [2024-02-14 20:05:40.531792] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596557 ] 00:05:03.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.331 [2024-02-14 20:05:40.585849] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.331 [2024-02-14 20:05:40.662435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.331 [2024-02-14 20:05:40.662522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.331 [2024-02-14 20:05:40.662608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.331 [2024-02-14 20:05:40.662610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.268 20:05:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:04.268 20:05:41 -- common/autotest_common.sh@850 -- # return 0 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 POWER: Env isn't set yet! 00:05:04.268 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:04.268 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.268 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.268 POWER: Attempting to initialise PSTAT power management... 00:05:04.268 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:04.268 POWER: Initialized successfully for lcore 0 power management 00:05:04.268 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:04.268 POWER: Initialized successfully for lcore 1 power management 00:05:04.268 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:04.268 POWER: Initialized successfully for lcore 2 power management 00:05:04.268 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:04.268 POWER: Initialized successfully for lcore 3 power management 00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 [2024-02-14 20:05:41.447141] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
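[annotation] Because the scheduler app is launched with --wait-for-rpc, the framework stays uninitialized until the harness selects the dynamic scheduler and calls framework_start_init; the governor switches to 'performance' above happen at that init step. A hedged sketch of the same sequence (paths and the backgrounding are assumptions; the RPC names mirror the rpc_cmd traces):

  SPDK_DIR=/path/to/spdk                                           # assumption
  sudo "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  # once the RPC socket is listening:
  sudo "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic  # must precede init
  sudo "$SPDK_DIR/scripts/rpc.py" framework_start_init             # releases the --wait-for-rpc hold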
00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:04.268 20:05:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:04.268 20:05:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 ************************************ 00:05:04.268 START TEST scheduler_create_thread 00:05:04.268 ************************************ 00:05:04.268 20:05:41 -- common/autotest_common.sh@1102 -- # scheduler_create_thread 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 2 00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 3 00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 4 00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 5 00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 6 00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 7 00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 8 00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 9 00:05:04.268 
20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.268 20:05:41 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:04.268 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.268 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.268 10 00:05:04.268 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.269 20:05:41 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:04.269 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.269 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.269 20:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.269 20:05:41 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:04.269 20:05:41 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:04.269 20:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.269 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:05.206 20:05:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:05.206 20:05:42 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:05.206 20:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:05.206 20:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:06.585 20:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:06.585 20:05:43 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:06.585 20:05:43 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:06.585 20:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:06.585 20:05:43 -- common/autotest_common.sh@10 -- # set +x 00:05:07.522 20:05:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:07.522 00:05:07.522 real 0m3.380s 00:05:07.522 user 0m0.021s 00:05:07.522 sys 0m0.007s 00:05:07.522 20:05:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.522 20:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.522 ************************************ 00:05:07.522 END TEST scheduler_create_thread 00:05:07.522 ************************************ 00:05:07.522 20:05:44 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.522 20:05:44 -- scheduler/scheduler.sh@46 -- # killprocess 1596557 00:05:07.522 20:05:44 -- common/autotest_common.sh@924 -- # '[' -z 1596557 ']' 00:05:07.522 20:05:44 -- common/autotest_common.sh@928 -- # kill -0 1596557 00:05:07.522 20:05:44 -- common/autotest_common.sh@929 -- # uname 00:05:07.522 20:05:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:07.522 20:05:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1596557 00:05:07.522 20:05:44 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:05:07.522 20:05:44 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:05:07.522 20:05:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1596557' 00:05:07.522 killing process with pid 1596557 00:05:07.522 20:05:44 -- common/autotest_common.sh@943 -- # kill 1596557 00:05:07.522 20:05:44 -- common/autotest_common.sh@948 -- # wait 1596557 00:05:08.090 [2024-02-14 20:05:45.215027] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
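[annotation] The scheduler_create_thread test above exercises the test app's RPC plugin: fully active threads pinned to each core, idle pinned threads, unpinned threads at partial activity, one activity bump via scheduler_thread_set_active, and one deletion. A sketch of those plugin calls, assuming rpc.py can import scheduler_plugin (e.g. via PYTHONPATH); the wrapper function is illustrative, not part of the harness:

  rpc() { sudo "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }  # assumption: plugin importable
  rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100  # fully busy thread pinned to core 0
  rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # idle thread pinned to core 0
  tid=$(rpc scheduler_thread_create -n half_active -a 0)      # unpinned, starts idle
  rpc scheduler_thread_set_active "$tid" 50                   # raise it to 50% activity
  tid2=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid2"                         # exercise thread removal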
00:05:08.090 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:08.090 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:08.090 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:08.090 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:08.090 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:08.090 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:08.090 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:08.090 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:08.090 00:05:08.090 real 0m5.059s 00:05:08.090 user 0m10.440s 00:05:08.090 sys 0m0.332s 00:05:08.090 20:05:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.090 20:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.090 ************************************ 00:05:08.090 END TEST event_scheduler 00:05:08.090 ************************************ 00:05:08.090 20:05:45 -- event/event.sh@51 -- # modprobe -n nbd 00:05:08.090 20:05:45 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:08.090 20:05:45 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:08.090 20:05:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:08.090 20:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.350 ************************************ 00:05:08.350 START TEST app_repeat 00:05:08.350 ************************************ 00:05:08.350 20:05:45 -- common/autotest_common.sh@1102 -- # app_repeat_test 00:05:08.350 20:05:45 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.350 20:05:45 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.350 20:05:45 -- event/event.sh@13 -- # local nbd_list 00:05:08.350 20:05:45 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.350 20:05:45 -- event/event.sh@14 -- # local bdev_list 00:05:08.350 20:05:45 -- event/event.sh@15 -- # local repeat_times=4 00:05:08.350 20:05:45 -- event/event.sh@17 -- # modprobe nbd 00:05:08.350 20:05:45 -- event/event.sh@19 -- # repeat_pid=1597526 00:05:08.350 20:05:45 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:08.350 20:05:45 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.350 20:05:45 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1597526' 00:05:08.350 Process app_repeat pid: 1597526 00:05:08.350 20:05:45 -- event/event.sh@23 -- # for i in {0..2} 00:05:08.350 20:05:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:08.350 spdk_app_start Round 0 00:05:08.350 20:05:45 -- event/event.sh@25 -- # waitforlisten 1597526 /var/tmp/spdk-nbd.sock 00:05:08.350 20:05:45 -- common/autotest_common.sh@817 -- # '[' -z 1597526 ']' 00:05:08.350 20:05:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.350 20:05:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.350 20:05:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:08.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.350 20:05:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.350 20:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.350 [2024-02-14 20:05:45.543191] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:08.350 [2024-02-14 20:05:45.543263] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597526 ] 00:05:08.350 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.350 [2024-02-14 20:05:45.604225] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.350 [2024-02-14 20:05:45.680695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.350 [2024-02-14 20:05:45.680698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.287 20:05:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.287 20:05:46 -- common/autotest_common.sh@850 -- # return 0 00:05:09.287 20:05:46 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.287 Malloc0 00:05:09.287 20:05:46 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.287 Malloc1 00:05:09.546 20:05:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@12 -- # local i 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.546 /dev/nbd0 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.546 20:05:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.546 20:05:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:09.547 20:05:46 -- common/autotest_common.sh@855 -- # local i 00:05:09.547 20:05:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:09.547 20:05:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:09.547 20:05:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:09.547 20:05:46 -- 
common/autotest_common.sh@859 -- # break 00:05:09.547 20:05:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:09.547 20:05:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:09.547 20:05:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.547 1+0 records in 00:05:09.547 1+0 records out 00:05:09.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185347 s, 22.1 MB/s 00:05:09.547 20:05:46 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.547 20:05:46 -- common/autotest_common.sh@872 -- # size=4096 00:05:09.547 20:05:46 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.547 20:05:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:09.547 20:05:46 -- common/autotest_common.sh@875 -- # return 0 00:05:09.547 20:05:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.547 20:05:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.547 20:05:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.806 /dev/nbd1 00:05:09.806 20:05:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.806 20:05:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.806 20:05:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:09.806 20:05:47 -- common/autotest_common.sh@855 -- # local i 00:05:09.806 20:05:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:09.806 20:05:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:09.806 20:05:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:09.806 20:05:47 -- common/autotest_common.sh@859 -- # break 00:05:09.806 20:05:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:09.806 20:05:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:09.806 20:05:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.806 1+0 records in 00:05:09.806 1+0 records out 00:05:09.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232924 s, 17.6 MB/s 00:05:09.806 20:05:47 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.806 20:05:47 -- common/autotest_common.sh@872 -- # size=4096 00:05:09.806 20:05:47 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.806 20:05:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:09.806 20:05:47 -- common/autotest_common.sh@875 -- # return 0 00:05:09.806 20:05:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.806 20:05:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.806 20:05:47 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.806 20:05:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.806 20:05:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.065 20:05:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.066 { 00:05:10.066 "nbd_device": "/dev/nbd0", 00:05:10.066 "bdev_name": "Malloc0" 00:05:10.066 }, 00:05:10.066 { 00:05:10.066 "nbd_device": "/dev/nbd1", 
00:05:10.066 "bdev_name": "Malloc1" 00:05:10.066 } 00:05:10.066 ]' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.066 { 00:05:10.066 "nbd_device": "/dev/nbd0", 00:05:10.066 "bdev_name": "Malloc0" 00:05:10.066 }, 00:05:10.066 { 00:05:10.066 "nbd_device": "/dev/nbd1", 00:05:10.066 "bdev_name": "Malloc1" 00:05:10.066 } 00:05:10.066 ]' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.066 /dev/nbd1' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.066 /dev/nbd1' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.066 256+0 records in 00:05:10.066 256+0 records out 00:05:10.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0034511 s, 304 MB/s 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.066 256+0 records in 00:05:10.066 256+0 records out 00:05:10.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138645 s, 75.6 MB/s 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.066 256+0 records in 00:05:10.066 256+0 records out 00:05:10.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145243 s, 72.2 MB/s 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@51 -- # local i 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.066 20:05:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@41 -- # break 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.326 20:05:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@41 -- # break 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@65 -- # true 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.590 20:05:47 -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.590 20:05:47 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.849 20:05:48 -- event/event.sh@35 -- # 
sleep 3 00:05:11.109 [2024-02-14 20:05:48.376978] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.109 [2024-02-14 20:05:48.439653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.109 [2024-02-14 20:05:48.439656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.109 [2024-02-14 20:05:48.480261] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.109 [2024-02-14 20:05:48.480301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.423 20:05:51 -- event/event.sh@23 -- # for i in {0..2} 00:05:14.423 20:05:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:14.423 spdk_app_start Round 1 00:05:14.423 20:05:51 -- event/event.sh@25 -- # waitforlisten 1597526 /var/tmp/spdk-nbd.sock 00:05:14.423 20:05:51 -- common/autotest_common.sh@817 -- # '[' -z 1597526 ']' 00:05:14.423 20:05:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.423 20:05:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.423 20:05:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.423 20:05:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.423 20:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:14.423 20:05:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:14.423 20:05:51 -- common/autotest_common.sh@850 -- # return 0 00:05:14.423 20:05:51 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.423 Malloc0 00:05:14.423 20:05:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.423 Malloc1 00:05:14.423 20:05:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@12 -- # local i 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.423 20:05:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.682 /dev/nbd0 00:05:14.682 20:05:51 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.682 20:05:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.682 20:05:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:14.683 20:05:51 -- common/autotest_common.sh@855 -- # local i 00:05:14.683 20:05:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:14.683 20:05:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:14.683 20:05:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:14.683 20:05:51 -- common/autotest_common.sh@859 -- # break 00:05:14.683 20:05:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:14.683 20:05:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:14.683 20:05:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.683 1+0 records in 00:05:14.683 1+0 records out 00:05:14.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205351 s, 19.9 MB/s 00:05:14.683 20:05:51 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.683 20:05:51 -- common/autotest_common.sh@872 -- # size=4096 00:05:14.683 20:05:51 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.683 20:05:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:14.683 20:05:51 -- common/autotest_common.sh@875 -- # return 0 00:05:14.683 20:05:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.683 20:05:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.683 20:05:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.683 /dev/nbd1 00:05:14.683 20:05:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.683 20:05:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.683 20:05:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:14.683 20:05:52 -- common/autotest_common.sh@855 -- # local i 00:05:14.683 20:05:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:14.683 20:05:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:14.683 20:05:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:14.683 20:05:52 -- common/autotest_common.sh@859 -- # break 00:05:14.683 20:05:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:14.683 20:05:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:14.683 20:05:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.683 1+0 records in 00:05:14.683 1+0 records out 00:05:14.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181008 s, 22.6 MB/s 00:05:14.683 20:05:52 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.683 20:05:52 -- common/autotest_common.sh@872 -- # size=4096 00:05:14.683 20:05:52 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.683 20:05:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:14.683 20:05:52 -- common/autotest_common.sh@875 -- # return 0 00:05:14.683 20:05:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.683 20:05:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.683 20:05:52 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.683 20:05:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.683 20:05:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.942 { 00:05:14.942 "nbd_device": "/dev/nbd0", 00:05:14.942 "bdev_name": "Malloc0" 00:05:14.942 }, 00:05:14.942 { 00:05:14.942 "nbd_device": "/dev/nbd1", 00:05:14.942 "bdev_name": "Malloc1" 00:05:14.942 } 00:05:14.942 ]' 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.942 { 00:05:14.942 "nbd_device": "/dev/nbd0", 00:05:14.942 "bdev_name": "Malloc0" 00:05:14.942 }, 00:05:14.942 { 00:05:14.942 "nbd_device": "/dev/nbd1", 00:05:14.942 "bdev_name": "Malloc1" 00:05:14.942 } 00:05:14.942 ]' 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.942 /dev/nbd1' 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.942 /dev/nbd1' 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.942 256+0 records in 00:05:14.942 256+0 records out 00:05:14.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104042 s, 101 MB/s 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.942 20:05:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.943 256+0 records in 00:05:14.943 256+0 records out 00:05:14.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140121 s, 74.8 MB/s 00:05:14.943 20:05:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.943 20:05:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.943 256+0 records in 00:05:14.943 256+0 records out 00:05:14.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143344 s, 73.2 MB/s 00:05:14.943 20:05:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.943 20:05:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.943 20:05:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.943 20:05:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.943 20:05:52 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.943 20:05:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@51 -- # local i 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@41 -- # break 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.202 20:05:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@41 -- # break 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.462 20:05:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@65 -- # true 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.722 20:05:52 -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.722 20:05:52 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.982 20:05:53 -- event/event.sh@35 -- # sleep 3 00:05:15.982 [2024-02-14 20:05:53.360826] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.241 [2024-02-14 20:05:53.427399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.241 [2024-02-14 20:05:53.427402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.241 [2024-02-14 20:05:53.468050] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.241 [2024-02-14 20:05:53.468089] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.777 20:05:56 -- event/event.sh@23 -- # for i in {0..2} 00:05:18.777 20:05:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:18.777 spdk_app_start Round 2 00:05:18.777 20:05:56 -- event/event.sh@25 -- # waitforlisten 1597526 /var/tmp/spdk-nbd.sock 00:05:18.777 20:05:56 -- common/autotest_common.sh@817 -- # '[' -z 1597526 ']' 00:05:18.777 20:05:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.777 20:05:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:18.777 20:05:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
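[annotation] Each app_repeat round repeats the same data-path check traced above: create two 64 MiB malloc bdevs, export them over NBD, write random data through the kernel block device with O_DIRECT, and cmp it back. A condensed sketch of one pass for a single device (socket and temp-file paths are assumptions; the commands mirror the dd/cmp traces):

  RPC="sudo $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"  # assumption: app_repeat already listening
  $RPC bdev_malloc_create 64 4096             # 64 MiB bdev, 4 KiB blocks -> Malloc0
  $RPC nbd_start_disk Malloc0 /dev/nbd0       # export the bdev as /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0     # byte-for-byte read-back verification
  $RPC nbd_stop_disk /dev/nbd0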
00:05:18.777 20:05:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:18.777 20:05:56 -- common/autotest_common.sh@10 -- # set +x 00:05:19.036 20:05:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:19.036 20:05:56 -- common/autotest_common.sh@850 -- # return 0 00:05:19.036 20:05:56 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.296 Malloc0 00:05:19.296 20:05:56 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.296 Malloc1 00:05:19.296 20:05:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@12 -- # local i 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.297 20:05:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.556 /dev/nbd0 00:05:19.556 20:05:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.556 20:05:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.556 20:05:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:19.556 20:05:56 -- common/autotest_common.sh@855 -- # local i 00:05:19.556 20:05:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:19.556 20:05:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:19.556 20:05:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:19.556 20:05:56 -- common/autotest_common.sh@859 -- # break 00:05:19.556 20:05:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:19.556 20:05:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:19.556 20:05:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.556 1+0 records in 00:05:19.556 1+0 records out 00:05:19.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197936 s, 20.7 MB/s 00:05:19.556 20:05:56 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.556 20:05:56 -- common/autotest_common.sh@872 -- # size=4096 00:05:19.556 20:05:56 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.556 20:05:56 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:19.556 20:05:56 -- common/autotest_common.sh@875 -- # return 0 00:05:19.556 20:05:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.557 20:05:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.557 20:05:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.816 /dev/nbd1 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.816 20:05:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:19.816 20:05:57 -- common/autotest_common.sh@855 -- # local i 00:05:19.816 20:05:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:19.816 20:05:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:19.816 20:05:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:19.816 20:05:57 -- common/autotest_common.sh@859 -- # break 00:05:19.816 20:05:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:19.816 20:05:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:19.816 20:05:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.816 1+0 records in 00:05:19.816 1+0 records out 00:05:19.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00011145 s, 36.8 MB/s 00:05:19.816 20:05:57 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.816 20:05:57 -- common/autotest_common.sh@872 -- # size=4096 00:05:19.816 20:05:57 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.816 20:05:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:19.816 20:05:57 -- common/autotest_common.sh@875 -- # return 0 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.816 { 00:05:19.816 "nbd_device": "/dev/nbd0", 00:05:19.816 "bdev_name": "Malloc0" 00:05:19.816 }, 00:05:19.816 { 00:05:19.816 "nbd_device": "/dev/nbd1", 00:05:19.816 "bdev_name": "Malloc1" 00:05:19.816 } 00:05:19.816 ]' 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.816 { 00:05:19.816 "nbd_device": "/dev/nbd0", 00:05:19.816 "bdev_name": "Malloc0" 00:05:19.816 }, 00:05:19.816 { 00:05:19.816 "nbd_device": "/dev/nbd1", 00:05:19.816 "bdev_name": "Malloc1" 00:05:19.816 } 00:05:19.816 ]' 00:05:19.816 20:05:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.076 /dev/nbd1' 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.076 /dev/nbd1' 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.076 20:05:57 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.076 256+0 records in 00:05:20.076 256+0 records out 00:05:20.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103305 s, 102 MB/s 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.076 256+0 records in 00:05:20.076 256+0 records out 00:05:20.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141708 s, 74.0 MB/s 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.076 256+0 records in 00:05:20.076 256+0 records out 00:05:20.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147278 s, 71.2 MB/s 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@51 -- # local i 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.076 20:05:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.336 20:05:57 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@41 -- # break 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@41 -- # break 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.336 20:05:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@65 -- # true 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.596 20:05:57 -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.596 20:05:57 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.855 20:05:58 -- event/event.sh@35 -- # sleep 3 00:05:21.115 [2024-02-14 20:05:58.323544] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.116 [2024-02-14 20:05:58.388557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.116 [2024-02-14 20:05:58.388559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.116 [2024-02-14 20:05:58.429346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.116 [2024-02-14 20:05:58.429388] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
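The nbd teardown traced above pairs each nbd_stop_disk RPC with a waitfornbd_exit poll on /proc/partitions. A plausible reconstruction of that helper, consistent with the nbd_common.sh@35-45 trace lines (the 20-iteration bound and the grep come straight from the trace; the sleep interval is an assumption, since set -x does not show it):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the device drops out of /proc/partitions once the kernel detaches it
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # assumed back-off between polls
            else
                break       # matches the sh@41 break once the grep stops matching
            fi
        done
        return 0            # sh@45
    }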
00:05:24.410 20:06:01 -- event/event.sh@38 -- # waitforlisten 1597526 /var/tmp/spdk-nbd.sock 00:05:24.410 20:06:01 -- common/autotest_common.sh@817 -- # '[' -z 1597526 ']' 00:05:24.410 20:06:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.410 20:06:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.410 20:06:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.410 20:06:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.410 20:06:01 -- common/autotest_common.sh@10 -- # set +x 00:05:24.410 20:06:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.410 20:06:01 -- common/autotest_common.sh@850 -- # return 0 00:05:24.410 20:06:01 -- event/event.sh@39 -- # killprocess 1597526 00:05:24.410 20:06:01 -- common/autotest_common.sh@924 -- # '[' -z 1597526 ']' 00:05:24.410 20:06:01 -- common/autotest_common.sh@928 -- # kill -0 1597526 00:05:24.410 20:06:01 -- common/autotest_common.sh@929 -- # uname 00:05:24.410 20:06:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:24.410 20:06:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1597526 00:05:24.410 20:06:01 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:24.410 20:06:01 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:24.410 20:06:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1597526' 00:05:24.410 killing process with pid 1597526 00:05:24.410 20:06:01 -- common/autotest_common.sh@943 -- # kill 1597526 00:05:24.410 20:06:01 -- common/autotest_common.sh@948 -- # wait 1597526 00:05:24.410 spdk_app_start is called in Round 0. 00:05:24.410 Shutdown signal received, stop current app iteration 00:05:24.410 Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 reinitialization... 00:05:24.410 spdk_app_start is called in Round 1. 00:05:24.410 Shutdown signal received, stop current app iteration 00:05:24.410 Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 reinitialization... 00:05:24.410 spdk_app_start is called in Round 2. 00:05:24.410 Shutdown signal received, stop current app iteration 00:05:24.410 Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 reinitialization... 00:05:24.410 spdk_app_start is called in Round 3. 
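Every killprocess call traced in this section follows the same shape (autotest_common.sh@924-948): refuse an empty pid, probe it with kill -0, look up the command name so a sudo wrapper is never killed blindly, then kill and reap. A reduced sketch of that contract, with the sudo special case deliberately elided:

    killprocess() {
        local pid=$1
        [ -n "$pid" ]                                # sh@924: empty pid is an error
        kill -0 "$pid"                               # sh@928: fails if the pid is gone
        if [ "$(uname)" = Linux ]; then              # sh@929
            local name
            name=$(ps --no-headers -o comm= "$pid")  # sh@930: reactor_0 here
            # sh@934 compares $name to 'sudo'; the real helper treats that
            # case specially, which this sketch omits
        fi
        echo "killing process with pid $pid"         # sh@942
        kill "$pid"                                  # sh@943
        wait "$pid"                                  # sh@948: reap and propagate rc
    }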
00:05:24.410 Shutdown signal received, stop current app iteration 00:05:24.410 20:06:01 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:24.410 20:06:01 -- event/event.sh@42 -- # return 0 00:05:24.410 00:05:24.410 real 0m16.013s 00:05:24.410 user 0m34.445s 00:05:24.410 sys 0m2.315s 00:05:24.410 20:06:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.410 20:06:01 -- common/autotest_common.sh@10 -- # set +x 00:05:24.410 ************************************ 00:05:24.410 END TEST app_repeat 00:05:24.410 ************************************ 00:05:24.410 20:06:01 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:24.410 20:06:01 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:24.410 20:06:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:24.410 20:06:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:24.410 20:06:01 -- common/autotest_common.sh@10 -- # set +x 00:05:24.410 ************************************ 00:05:24.410 START TEST cpu_locks 00:05:24.410 ************************************ 00:05:24.410 20:06:01 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:24.410 * Looking for test storage... 00:05:24.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:24.410 20:06:01 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:24.410 20:06:01 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:24.410 20:06:01 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:24.410 20:06:01 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:24.410 20:06:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:24.410 20:06:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:24.410 20:06:01 -- common/autotest_common.sh@10 -- # set +x 00:05:24.410 ************************************ 00:05:24.410 START TEST default_locks 00:05:24.410 ************************************ 00:05:24.410 20:06:01 -- common/autotest_common.sh@1102 -- # default_locks 00:05:24.410 20:06:01 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1600629 00:05:24.410 20:06:01 -- event/cpu_locks.sh@47 -- # waitforlisten 1600629 00:05:24.410 20:06:01 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.410 20:06:01 -- common/autotest_common.sh@817 -- # '[' -z 1600629 ']' 00:05:24.410 20:06:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.410 20:06:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.410 20:06:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.410 20:06:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.410 20:06:01 -- common/autotest_common.sh@10 -- # set +x 00:05:24.410 [2024-02-14 20:06:01.691462] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
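The cpu_locks suite that starts here drives everything through two RPC sockets (rpc_sock1=/var/tmp/spdk.sock and rpc_sock2=/var/tmp/spdk2.sock, per the cpu_locks.sh@11-12 lines above) and asserts on the per-core lock files a live spdk_tgt holds: each subtest boots one or two targets with a chosen -m core mask and then checks, through lslocks or the /var/tmp/spdk_cpu_lock_* glob, which instance owns which core.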
00:05:24.410 [2024-02-14 20:06:01.691518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600629 ] 00:05:24.410 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.410 [2024-02-14 20:06:01.756709] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.670 [2024-02-14 20:06:01.831390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.670 [2024-02-14 20:06:01.831508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.239 20:06:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.240 20:06:02 -- common/autotest_common.sh@850 -- # return 0 00:05:25.240 20:06:02 -- event/cpu_locks.sh@49 -- # locks_exist 1600629 00:05:25.240 20:06:02 -- event/cpu_locks.sh@22 -- # lslocks -p 1600629 00:05:25.240 20:06:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.240 lslocks: write error 00:05:25.240 20:06:02 -- event/cpu_locks.sh@50 -- # killprocess 1600629 00:05:25.240 20:06:02 -- common/autotest_common.sh@924 -- # '[' -z 1600629 ']' 00:05:25.240 20:06:02 -- common/autotest_common.sh@928 -- # kill -0 1600629 00:05:25.240 20:06:02 -- common/autotest_common.sh@929 -- # uname 00:05:25.240 20:06:02 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:25.240 20:06:02 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1600629 00:05:25.240 20:06:02 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:25.240 20:06:02 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:25.240 20:06:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1600629' 00:05:25.240 killing process with pid 1600629 00:05:25.240 20:06:02 -- common/autotest_common.sh@943 -- # kill 1600629 00:05:25.240 20:06:02 -- common/autotest_common.sh@948 -- # wait 1600629 00:05:25.810 20:06:02 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1600629 00:05:25.810 20:06:02 -- common/autotest_common.sh@638 -- # local es=0 00:05:25.810 20:06:02 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1600629 00:05:25.810 20:06:02 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:25.810 20:06:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:25.810 20:06:02 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:25.810 20:06:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:25.810 20:06:02 -- common/autotest_common.sh@641 -- # waitforlisten 1600629 00:05:25.810 20:06:02 -- common/autotest_common.sh@817 -- # '[' -z 1600629 ']' 00:05:25.810 20:06:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.810 20:06:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.810 20:06:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
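locks_exist, traced just above for pid 1600629, is worth spelling out, and the stray 'lslocks: write error' beside it is benign: grep -q exits on its first match, which closes the pipe while lslocks is still writing. A sketch matching the cpu_locks.sh@22 trace:

    locks_exist() {
        local pid=$1
        # a live spdk_tgt holds a POSIX lock on its per-core file
        # (/var/tmp/spdk_cpu_lock_000 for -m 0x1), so the name shows up here
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }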
00:05:25.810 20:06:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.810 20:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1600629) - No such process 00:05:25.810 ERROR: process (pid: 1600629) is no longer running 00:05:25.810 20:06:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.810 20:06:02 -- common/autotest_common.sh@850 -- # return 1 00:05:25.810 20:06:02 -- common/autotest_common.sh@641 -- # es=1 00:05:25.810 20:06:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:25.810 20:06:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:25.810 20:06:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:25.810 20:06:02 -- event/cpu_locks.sh@54 -- # no_locks 00:05:25.810 20:06:02 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:25.810 20:06:02 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:25.810 20:06:02 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:25.810 00:05:25.810 real 0m1.338s 00:05:25.810 user 0m1.393s 00:05:25.810 sys 0m0.400s 00:05:25.810 20:06:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.810 20:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.810 ************************************ 00:05:25.810 END TEST default_locks 00:05:25.810 ************************************ 00:05:25.810 20:06:03 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:25.810 20:06:03 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:25.810 20:06:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:25.810 20:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.810 ************************************ 00:05:25.810 START TEST default_locks_via_rpc 00:05:25.810 ************************************ 00:05:25.810 20:06:03 -- common/autotest_common.sh@1102 -- # default_locks_via_rpc 00:05:25.810 20:06:03 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.810 20:06:03 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1600899 00:05:25.810 20:06:03 -- event/cpu_locks.sh@63 -- # waitforlisten 1600899 00:05:25.810 20:06:03 -- common/autotest_common.sh@817 -- # '[' -z 1600899 ']' 00:05:25.810 20:06:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.810 20:06:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.810 20:06:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.810 20:06:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.810 20:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.810 [2024-02-14 20:06:03.053389] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
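The NOT wrapper around waitforlisten above is the harness's negative-test idiom: run a command that is expected to fail and succeed only if it did, while still rejecting signal-style exits. A reduced sketch of that contract (the real helper also validates its argument through valid_exec_arg, which is omitted here):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # sh@649: above 128 means killed by a signal
        (( es != 0 ))                # pass exactly when the wrapped command failed
    }

    # usage, as in cpu_locks.sh@52: NOT waitforlisten 1600629
    # passes here because pid 1600629 was already killed and reaped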
00:05:25.810 [2024-02-14 20:06:03.053440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600899 ] 00:05:25.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.810 [2024-02-14 20:06:03.112924] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.810 [2024-02-14 20:06:03.190620] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.810 [2024-02-14 20:06:03.190741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.749 20:06:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.749 20:06:03 -- common/autotest_common.sh@850 -- # return 0 00:05:26.749 20:06:03 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:26.749 20:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.749 20:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:26.749 20:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.749 20:06:03 -- event/cpu_locks.sh@67 -- # no_locks 00:05:26.749 20:06:03 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.749 20:06:03 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.749 20:06:03 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.749 20:06:03 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.749 20:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.749 20:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:26.749 20:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.749 20:06:03 -- event/cpu_locks.sh@71 -- # locks_exist 1600899 00:05:26.749 20:06:03 -- event/cpu_locks.sh@22 -- # lslocks -p 1600899 00:05:26.749 20:06:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.009 20:06:04 -- event/cpu_locks.sh@73 -- # killprocess 1600899 00:05:27.009 20:06:04 -- common/autotest_common.sh@924 -- # '[' -z 1600899 ']' 00:05:27.009 20:06:04 -- common/autotest_common.sh@928 -- # kill -0 1600899 00:05:27.009 20:06:04 -- common/autotest_common.sh@929 -- # uname 00:05:27.009 20:06:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:27.009 20:06:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1600899 00:05:27.009 20:06:04 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:27.009 20:06:04 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:27.009 20:06:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1600899' 00:05:27.009 killing process with pid 1600899 00:05:27.009 20:06:04 -- common/autotest_common.sh@943 -- # kill 1600899 00:05:27.009 20:06:04 -- common/autotest_common.sh@948 -- # wait 1600899 00:05:27.270 00:05:27.270 real 0m1.591s 00:05:27.270 user 0m1.675s 00:05:27.270 sys 0m0.485s 00:05:27.270 20:06:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.270 20:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:27.270 ************************************ 00:05:27.270 END TEST default_locks_via_rpc 00:05:27.270 ************************************ 00:05:27.270 20:06:04 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:27.270 20:06:04 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:27.270 20:06:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:27.270 20:06:04 -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.270 ************************************ 00:05:27.270 START TEST non_locking_app_on_locked_coremask 00:05:27.270 ************************************ 00:05:27.270 20:06:04 -- common/autotest_common.sh@1102 -- # non_locking_app_on_locked_coremask 00:05:27.270 20:06:04 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1601165 00:05:27.270 20:06:04 -- event/cpu_locks.sh@81 -- # waitforlisten 1601165 /var/tmp/spdk.sock 00:05:27.270 20:06:04 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.270 20:06:04 -- common/autotest_common.sh@817 -- # '[' -z 1601165 ']' 00:05:27.270 20:06:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.270 20:06:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.270 20:06:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.270 20:06:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.270 20:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:27.530 [2024-02-14 20:06:04.694678] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:27.530 [2024-02-14 20:06:04.694731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601165 ] 00:05:27.530 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.530 [2024-02-14 20:06:04.754659] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.530 [2024-02-14 20:06:04.820307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.530 [2024-02-14 20:06:04.820430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.099 20:06:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.099 20:06:05 -- common/autotest_common.sh@850 -- # return 0 00:05:28.099 20:06:05 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1601556 00:05:28.099 20:06:05 -- event/cpu_locks.sh@85 -- # waitforlisten 1601556 /var/tmp/spdk2.sock 00:05:28.099 20:06:05 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:28.099 20:06:05 -- common/autotest_common.sh@817 -- # '[' -z 1601556 ']' 00:05:28.099 20:06:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.099 20:06:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.099 20:06:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.099 20:06:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.099 20:06:05 -- common/autotest_common.sh@10 -- # set +x 00:05:28.359 [2024-02-14 20:06:05.532383] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
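non_locking_app_on_locked_coremask reduces to the two launches traced above: the first target claims core 0's lock, and the second reuses the same mask but opts out of locking, so both may run. The essential lines, with binary and socket paths as in this workspace:

    # first instance: takes the POSIX lock on /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 &
    # second instance: same core mask, separate RPC socket, no lock claim
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &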
00:05:28.359 [2024-02-14 20:06:05.532433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601556 ] 00:05:28.359 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.359 [2024-02-14 20:06:05.612447] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:28.359 [2024-02-14 20:06:05.612476] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.359 [2024-02-14 20:06:05.754334] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.359 [2024-02-14 20:06:05.754478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.929 20:06:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.929 20:06:06 -- common/autotest_common.sh@850 -- # return 0 00:05:28.929 20:06:06 -- event/cpu_locks.sh@87 -- # locks_exist 1601165 00:05:28.929 20:06:06 -- event/cpu_locks.sh@22 -- # lslocks -p 1601165 00:05:28.929 20:06:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.189 lslocks: write error 00:05:29.189 20:06:06 -- event/cpu_locks.sh@89 -- # killprocess 1601165 00:05:29.189 20:06:06 -- common/autotest_common.sh@924 -- # '[' -z 1601165 ']' 00:05:29.189 20:06:06 -- common/autotest_common.sh@928 -- # kill -0 1601165 00:05:29.189 20:06:06 -- common/autotest_common.sh@929 -- # uname 00:05:29.449 20:06:06 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:29.449 20:06:06 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1601165 00:05:29.449 20:06:06 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:29.449 20:06:06 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:29.449 20:06:06 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1601165' 00:05:29.449 killing process with pid 1601165 00:05:29.449 20:06:06 -- common/autotest_common.sh@943 -- # kill 1601165 00:05:29.449 20:06:06 -- common/autotest_common.sh@948 -- # wait 1601165 00:05:30.019 20:06:07 -- event/cpu_locks.sh@90 -- # killprocess 1601556 00:05:30.019 20:06:07 -- common/autotest_common.sh@924 -- # '[' -z 1601556 ']' 00:05:30.019 20:06:07 -- common/autotest_common.sh@928 -- # kill -0 1601556 00:05:30.019 20:06:07 -- common/autotest_common.sh@929 -- # uname 00:05:30.019 20:06:07 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:30.019 20:06:07 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1601556 00:05:30.019 20:06:07 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:30.019 20:06:07 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:30.019 20:06:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1601556' 00:05:30.019 killing process with pid 1601556 00:05:30.019 20:06:07 -- common/autotest_common.sh@943 -- # kill 1601556 00:05:30.019 20:06:07 -- common/autotest_common.sh@948 -- # wait 1601556 00:05:30.278 00:05:30.278 real 0m3.027s 00:05:30.279 user 0m3.238s 00:05:30.279 sys 0m0.830s 00:05:30.279 20:06:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.279 20:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:30.279 ************************************ 00:05:30.279 END TEST non_locking_app_on_locked_coremask 00:05:30.279 ************************************ 00:05:30.539 20:06:07 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:05:30.539 20:06:07 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:30.539 20:06:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:30.539 20:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:30.539 ************************************ 00:05:30.539 START TEST locking_app_on_unlocked_coremask 00:05:30.539 ************************************ 00:05:30.539 20:06:07 -- common/autotest_common.sh@1102 -- # locking_app_on_unlocked_coremask 00:05:30.539 20:06:07 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1602057 00:05:30.539 20:06:07 -- event/cpu_locks.sh@99 -- # waitforlisten 1602057 /var/tmp/spdk.sock 00:05:30.539 20:06:07 -- common/autotest_common.sh@817 -- # '[' -z 1602057 ']' 00:05:30.539 20:06:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.539 20:06:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.539 20:06:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.539 20:06:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.539 20:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:30.539 20:06:07 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:30.539 [2024-02-14 20:06:07.757139] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:30.539 [2024-02-14 20:06:07.757192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602057 ] 00:05:30.539 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.539 [2024-02-14 20:06:07.816744] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.539 [2024-02-14 20:06:07.816770] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.539 [2024-02-14 20:06:07.892075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.539 [2024-02-14 20:06:07.892201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.478 20:06:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.478 20:06:08 -- common/autotest_common.sh@850 -- # return 0 00:05:31.478 20:06:08 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1602283 00:05:31.478 20:06:08 -- event/cpu_locks.sh@103 -- # waitforlisten 1602283 /var/tmp/spdk2.sock 00:05:31.478 20:06:08 -- common/autotest_common.sh@817 -- # '[' -z 1602283 ']' 00:05:31.478 20:06:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.478 20:06:08 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:31.478 20:06:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.478 20:06:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
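locking_app_on_unlocked_coremask, starting above, inverts the previous case: the first target boots with --disable-cpumask-locks and claims nothing, which leaves the lock-enabled second target free to take core 0 itself; that is why the locks_exist check that follows runs against pid 1602283, the second instance. In outline, with flags as traced:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # claims no core lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0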
00:05:31.478 20:06:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.478 20:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:31.478 [2024-02-14 20:06:08.570860] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:31.478 [2024-02-14 20:06:08.570910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602283 ] 00:05:31.478 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.478 [2024-02-14 20:06:08.649319] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.478 [2024-02-14 20:06:08.793701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.478 [2024-02-14 20:06:08.793831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.048 20:06:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.048 20:06:09 -- common/autotest_common.sh@850 -- # return 0 00:05:32.048 20:06:09 -- event/cpu_locks.sh@105 -- # locks_exist 1602283 00:05:32.048 20:06:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.048 20:06:09 -- event/cpu_locks.sh@22 -- # lslocks -p 1602283 00:05:32.308 lslocks: write error 00:05:32.308 20:06:09 -- event/cpu_locks.sh@107 -- # killprocess 1602057 00:05:32.308 20:06:09 -- common/autotest_common.sh@924 -- # '[' -z 1602057 ']' 00:05:32.308 20:06:09 -- common/autotest_common.sh@928 -- # kill -0 1602057 00:05:32.308 20:06:09 -- common/autotest_common.sh@929 -- # uname 00:05:32.308 20:06:09 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:32.308 20:06:09 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1602057 00:05:32.308 20:06:09 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:32.308 20:06:09 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:32.308 20:06:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1602057' 00:05:32.308 killing process with pid 1602057 00:05:32.308 20:06:09 -- common/autotest_common.sh@943 -- # kill 1602057 00:05:32.308 20:06:09 -- common/autotest_common.sh@948 -- # wait 1602057 00:05:33.248 20:06:10 -- event/cpu_locks.sh@108 -- # killprocess 1602283 00:05:33.248 20:06:10 -- common/autotest_common.sh@924 -- # '[' -z 1602283 ']' 00:05:33.248 20:06:10 -- common/autotest_common.sh@928 -- # kill -0 1602283 00:05:33.248 20:06:10 -- common/autotest_common.sh@929 -- # uname 00:05:33.248 20:06:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:33.248 20:06:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1602283 00:05:33.248 20:06:10 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:33.248 20:06:10 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:33.248 20:06:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1602283' 00:05:33.248 killing process with pid 1602283 00:05:33.248 20:06:10 -- common/autotest_common.sh@943 -- # kill 1602283 00:05:33.248 20:06:10 -- common/autotest_common.sh@948 -- # wait 1602283 00:05:33.508 00:05:33.508 real 0m2.981s 00:05:33.508 user 0m3.186s 00:05:33.508 sys 0m0.759s 00:05:33.508 20:06:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.508 20:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:33.508 ************************************ 00:05:33.508 END TEST locking_app_on_unlocked_coremask 00:05:33.508 
************************************ 00:05:33.508 20:06:10 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:33.508 20:06:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:33.508 20:06:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:33.508 20:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:33.508 ************************************ 00:05:33.508 START TEST locking_app_on_locked_coremask 00:05:33.508 ************************************ 00:05:33.508 20:06:10 -- common/autotest_common.sh@1102 -- # locking_app_on_locked_coremask 00:05:33.508 20:06:10 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1602720 00:05:33.508 20:06:10 -- event/cpu_locks.sh@116 -- # waitforlisten 1602720 /var/tmp/spdk.sock 00:05:33.508 20:06:10 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.508 20:06:10 -- common/autotest_common.sh@817 -- # '[' -z 1602720 ']' 00:05:33.508 20:06:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.508 20:06:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:33.508 20:06:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.508 20:06:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:33.508 20:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:33.508 [2024-02-14 20:06:10.774974] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:33.508 [2024-02-14 20:06:10.775023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602720 ] 00:05:33.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.508 [2024-02-14 20:06:10.833997] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.508 [2024-02-14 20:06:10.903722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.508 [2024-02-14 20:06:10.903846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.445 20:06:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:34.445 20:06:11 -- common/autotest_common.sh@850 -- # return 0 00:05:34.445 20:06:11 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1602789 00:05:34.445 20:06:11 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1602789 /var/tmp/spdk2.sock 00:05:34.445 20:06:11 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.445 20:06:11 -- common/autotest_common.sh@638 -- # local es=0 00:05:34.445 20:06:11 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1602789 /var/tmp/spdk2.sock 00:05:34.445 20:06:11 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:34.445 20:06:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.445 20:06:11 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:34.445 20:06:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.445 20:06:11 -- common/autotest_common.sh@641 -- # waitforlisten 1602789 /var/tmp/spdk2.sock 00:05:34.445 20:06:11 -- common/autotest_common.sh@817 -- # '[' -z 
1602789 ']' 00:05:34.445 20:06:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.445 20:06:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.445 20:06:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.445 20:06:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.445 20:06:11 -- common/autotest_common.sh@10 -- # set +x 00:05:34.445 [2024-02-14 20:06:11.603451] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:34.445 [2024-02-14 20:06:11.603494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602789 ] 00:05:34.445 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.445 [2024-02-14 20:06:11.683204] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1602720 has claimed it. 00:05:34.445 [2024-02-14 20:06:11.683242] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:35.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1602789) - No such process 00:05:35.013 ERROR: process (pid: 1602789) is no longer running 00:05:35.013 20:06:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:35.013 20:06:12 -- common/autotest_common.sh@850 -- # return 1 00:05:35.013 20:06:12 -- common/autotest_common.sh@641 -- # es=1 00:05:35.013 20:06:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:35.013 20:06:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:35.013 20:06:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:35.013 20:06:12 -- event/cpu_locks.sh@122 -- # locks_exist 1602720 00:05:35.013 20:06:12 -- event/cpu_locks.sh@22 -- # lslocks -p 1602720 00:05:35.013 20:06:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.273 lslocks: write error 00:05:35.273 20:06:12 -- event/cpu_locks.sh@124 -- # killprocess 1602720 00:05:35.273 20:06:12 -- common/autotest_common.sh@924 -- # '[' -z 1602720 ']' 00:05:35.273 20:06:12 -- common/autotest_common.sh@928 -- # kill -0 1602720 00:05:35.273 20:06:12 -- common/autotest_common.sh@929 -- # uname 00:05:35.273 20:06:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:35.273 20:06:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1602720 00:05:35.273 20:06:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:35.273 20:06:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:35.273 20:06:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1602720' 00:05:35.273 killing process with pid 1602720 00:05:35.273 20:06:12 -- common/autotest_common.sh@943 -- # kill 1602720 00:05:35.273 20:06:12 -- common/autotest_common.sh@948 -- # wait 1602720 00:05:35.843 00:05:35.843 real 0m2.250s 00:05:35.843 user 0m2.447s 00:05:35.843 sys 0m0.622s 00:05:35.843 20:06:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.843 20:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:35.843 ************************************ 00:05:35.843 END TEST locking_app_on_locked_coremask 00:05:35.843 ************************************ 00:05:35.843 20:06:13 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:35.843 20:06:13 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:35.843 20:06:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:35.843 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:35.843 ************************************ 00:05:35.843 START TEST locking_overlapped_coremask 00:05:35.843 ************************************ 00:05:35.843 20:06:13 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask 00:05:35.843 20:06:13 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1603044 00:05:35.843 20:06:13 -- event/cpu_locks.sh@133 -- # waitforlisten 1603044 /var/tmp/spdk.sock 00:05:35.843 20:06:13 -- common/autotest_common.sh@817 -- # '[' -z 1603044 ']' 00:05:35.843 20:06:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.843 20:06:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:35.843 20:06:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.843 20:06:13 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:35.843 20:06:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:35.843 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:35.843 [2024-02-14 20:06:13.058371] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:35.843 [2024-02-14 20:06:13.058423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603044 ] 00:05:35.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.843 [2024-02-14 20:06:13.117159] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.843 [2024-02-14 20:06:13.193829] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.843 [2024-02-14 20:06:13.193982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.843 [2024-02-14 20:06:13.194101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.843 [2024-02-14 20:06:13.194102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.783 20:06:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.783 20:06:13 -- common/autotest_common.sh@850 -- # return 0 00:05:36.783 20:06:13 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1603279 00:05:36.783 20:06:13 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1603279 /var/tmp/spdk2.sock 00:05:36.783 20:06:13 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:36.783 20:06:13 -- common/autotest_common.sh@638 -- # local es=0 00:05:36.783 20:06:13 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1603279 /var/tmp/spdk2.sock 00:05:36.783 20:06:13 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:36.783 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:36.783 20:06:13 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:36.783 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:36.783 20:06:13 -- 
common/autotest_common.sh@641 -- # waitforlisten 1603279 /var/tmp/spdk2.sock 00:05:36.783 20:06:13 -- common/autotest_common.sh@817 -- # '[' -z 1603279 ']' 00:05:36.783 20:06:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.783 20:06:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:36.783 20:06:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.783 20:06:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:36.783 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:36.783 [2024-02-14 20:06:13.892446] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:36.783 [2024-02-14 20:06:13.892489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603279 ] 00:05:36.783 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.783 [2024-02-14 20:06:13.971570] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1603044 has claimed it. 00:05:36.783 [2024-02-14 20:06:13.971602] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:37.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1603279) - No such process 00:05:37.353 ERROR: process (pid: 1603279) is no longer running 00:05:37.353 20:06:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.353 20:06:14 -- common/autotest_common.sh@850 -- # return 1 00:05:37.353 20:06:14 -- common/autotest_common.sh@641 -- # es=1 00:05:37.353 20:06:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:37.353 20:06:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:37.353 20:06:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:37.353 20:06:14 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:37.353 20:06:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:37.353 20:06:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:37.353 20:06:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:37.353 20:06:14 -- event/cpu_locks.sh@141 -- # killprocess 1603044 00:05:37.353 20:06:14 -- common/autotest_common.sh@924 -- # '[' -z 1603044 ']' 00:05:37.353 20:06:14 -- common/autotest_common.sh@928 -- # kill -0 1603044 00:05:37.353 20:06:14 -- common/autotest_common.sh@929 -- # uname 00:05:37.353 20:06:14 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:37.353 20:06:14 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1603044 00:05:37.353 20:06:14 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:37.353 20:06:14 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:37.353 20:06:14 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1603044' 00:05:37.353 killing process with pid 1603044 00:05:37.353 20:06:14 -- common/autotest_common.sh@943 -- # kill 1603044 00:05:37.353 20:06:14 -- 
common/autotest_common.sh@948 -- # wait 1603044 00:05:37.613 00:05:37.613 real 0m1.878s 00:05:37.613 user 0m5.230s 00:05:37.613 sys 0m0.398s 00:05:37.613 20:06:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.613 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.613 ************************************ 00:05:37.613 END TEST locking_overlapped_coremask 00:05:37.613 ************************************ 00:05:37.613 20:06:14 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:37.613 20:06:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:37.614 20:06:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:37.614 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.614 ************************************ 00:05:37.614 START TEST locking_overlapped_coremask_via_rpc 00:05:37.614 ************************************ 00:05:37.614 20:06:14 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask_via_rpc 00:05:37.614 20:06:14 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1603534 00:05:37.614 20:06:14 -- event/cpu_locks.sh@149 -- # waitforlisten 1603534 /var/tmp/spdk.sock 00:05:37.614 20:06:14 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:37.614 20:06:14 -- common/autotest_common.sh@817 -- # '[' -z 1603534 ']' 00:05:37.614 20:06:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.614 20:06:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.614 20:06:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.614 20:06:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.614 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.614 [2024-02-14 20:06:14.975105] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:37.614 [2024-02-14 20:06:14.975152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603534 ] 00:05:37.614 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.873 [2024-02-14 20:06:15.034720] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
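The failure that drove locking_overlapped_coremask above is plain mask arithmetic: -m 0x7 (binary 00111) pins cores 0-2, -m 0x1c (binary 11100) pins cores 2-4, and 0x7 & 0x1c = 0x4, so both masks claim core 2, exactly the core named in the 'Cannot create lock on core 2' error. The via-rpc variant starting here boots the same two masks with --disable-cpumask-locks, so both targets come up first and the conflicting claim is only attempted afterwards, over RPC.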
00:05:37.873 [2024-02-14 20:06:15.034750] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.874 [2024-02-14 20:06:15.099945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.874 [2024-02-14 20:06:15.100125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.874 [2024-02-14 20:06:15.100243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.874 [2024-02-14 20:06:15.100245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.443 20:06:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.443 20:06:15 -- common/autotest_common.sh@850 -- # return 0 00:05:38.443 20:06:15 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:38.443 20:06:15 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1603550 00:05:38.443 20:06:15 -- event/cpu_locks.sh@153 -- # waitforlisten 1603550 /var/tmp/spdk2.sock 00:05:38.443 20:06:15 -- common/autotest_common.sh@817 -- # '[' -z 1603550 ']' 00:05:38.443 20:06:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.443 20:06:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:38.443 20:06:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.443 20:06:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:38.443 20:06:15 -- common/autotest_common.sh@10 -- # set +x 00:05:38.443 [2024-02-14 20:06:15.801563] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:38.443 [2024-02-14 20:06:15.801606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603550 ] 00:05:38.443 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.704 [2024-02-14 20:06:15.885076] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.704 [2024-02-14 20:06:15.885104] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.704 [2024-02-14 20:06:16.022586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.704 [2024-02-14 20:06:16.022764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.704 [2024-02-14 20:06:16.022896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.704 [2024-02-14 20:06:16.022897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:39.274 20:06:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:39.274 20:06:16 -- common/autotest_common.sh@850 -- # return 0 00:05:39.274 20:06:16 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.274 20:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:39.274 20:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.274 20:06:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:39.274 20:06:16 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.274 20:06:16 -- common/autotest_common.sh@638 -- # local es=0 00:05:39.274 20:06:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.274 20:06:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:39.274 20:06:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.274 20:06:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:39.274 20:06:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.274 20:06:16 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.274 20:06:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:39.274 20:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.274 [2024-02-14 20:06:16.612712] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1603534 has claimed it. 00:05:39.274 request: 00:05:39.274 { 00:05:39.274 "method": "framework_enable_cpumask_locks", 00:05:39.274 "req_id": 1 00:05:39.274 } 00:05:39.274 Got JSON-RPC error response 00:05:39.274 response: 00:05:39.274 { 00:05:39.274 "code": -32603, 00:05:39.274 "message": "Failed to claim CPU core: 2" 00:05:39.274 } 00:05:39.274 20:06:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:39.274 20:06:16 -- common/autotest_common.sh@641 -- # es=1 00:05:39.274 20:06:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:39.274 20:06:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:39.274 20:06:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:39.274 20:06:16 -- event/cpu_locks.sh@158 -- # waitforlisten 1603534 /var/tmp/spdk.sock 00:05:39.274 20:06:16 -- common/autotest_common.sh@817 -- # '[' -z 1603534 ']' 00:05:39.274 20:06:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.274 20:06:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.274 20:06:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
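The failing exchange is fully visible above: the second target (mask 0x1c) asks to re-enable its locks, app.c@663 refuses because core 2 is already claimed by pid 1603534, and the request surfaces as JSON-RPC error -32603. The same call driven by hand would look like this, with the socket path per the trace and the quoted text being the 'message' field of the response above:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected to fail with: 'Failed to claim CPU core: 2'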
00:05:39.274 20:06:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.274 20:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.534 20:06:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:39.534 20:06:16 -- common/autotest_common.sh@850 -- # return 0 00:05:39.534 20:06:16 -- event/cpu_locks.sh@159 -- # waitforlisten 1603550 /var/tmp/spdk2.sock 00:05:39.534 20:06:16 -- common/autotest_common.sh@817 -- # '[' -z 1603550 ']' 00:05:39.534 20:06:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.534 20:06:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.534 20:06:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.534 20:06:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.534 20:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.795 20:06:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:39.795 20:06:16 -- common/autotest_common.sh@850 -- # return 0 00:05:39.795 20:06:16 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:39.795 20:06:16 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:39.795 20:06:16 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:39.795 20:06:16 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:39.795 00:05:39.795 real 0m2.052s 00:05:39.795 user 0m0.797s 00:05:39.795 sys 0m0.174s 00:05:39.795 20:06:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.795 20:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.795 ************************************ 00:05:39.795 END TEST locking_overlapped_coremask_via_rpc 00:05:39.795 ************************************ 00:05:39.795 20:06:17 -- event/cpu_locks.sh@174 -- # cleanup 00:05:39.795 20:06:17 -- event/cpu_locks.sh@15 -- # [[ -z 1603534 ]] 00:05:39.795 20:06:17 -- event/cpu_locks.sh@15 -- # killprocess 1603534 00:05:39.795 20:06:17 -- common/autotest_common.sh@924 -- # '[' -z 1603534 ']' 00:05:39.795 20:06:17 -- common/autotest_common.sh@928 -- # kill -0 1603534 00:05:39.795 20:06:17 -- common/autotest_common.sh@929 -- # uname 00:05:39.795 20:06:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:39.795 20:06:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1603534 00:05:39.795 20:06:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:39.795 20:06:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:39.795 20:06:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1603534' 00:05:39.795 killing process with pid 1603534 00:05:39.795 20:06:17 -- common/autotest_common.sh@943 -- # kill 1603534 00:05:39.795 20:06:17 -- common/autotest_common.sh@948 -- # wait 1603534 00:05:40.055 20:06:17 -- event/cpu_locks.sh@16 -- # [[ -z 1603550 ]] 00:05:40.055 20:06:17 -- event/cpu_locks.sh@16 -- # killprocess 1603550 00:05:40.055 20:06:17 -- common/autotest_common.sh@924 -- # '[' -z 1603550 ']' 00:05:40.055 20:06:17 -- common/autotest_common.sh@928 -- # kill -0 1603550 00:05:40.055 20:06:17 -- common/autotest_common.sh@929 -- # uname 
00:05:40.055 20:06:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:40.055 20:06:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1603550 00:05:40.055 20:06:17 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:05:40.055 20:06:17 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:05:40.055 20:06:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1603550' 00:05:40.055 killing process with pid 1603550 00:05:40.055 20:06:17 -- common/autotest_common.sh@943 -- # kill 1603550 00:05:40.055 20:06:17 -- common/autotest_common.sh@948 -- # wait 1603550 00:05:40.625 20:06:17 -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.625 20:06:17 -- event/cpu_locks.sh@1 -- # cleanup 00:05:40.625 20:06:17 -- event/cpu_locks.sh@15 -- # [[ -z 1603534 ]] 00:05:40.625 20:06:17 -- event/cpu_locks.sh@15 -- # killprocess 1603534 00:05:40.625 20:06:17 -- common/autotest_common.sh@924 -- # '[' -z 1603534 ']' 00:05:40.625 20:06:17 -- common/autotest_common.sh@928 -- # kill -0 1603534 00:05:40.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (1603534) - No such process 00:05:40.625 20:06:17 -- common/autotest_common.sh@951 -- # echo 'Process with pid 1603534 is not found' 00:05:40.625 Process with pid 1603534 is not found 00:05:40.625 20:06:17 -- event/cpu_locks.sh@16 -- # [[ -z 1603550 ]] 00:05:40.625 20:06:17 -- event/cpu_locks.sh@16 -- # killprocess 1603550 00:05:40.625 20:06:17 -- common/autotest_common.sh@924 -- # '[' -z 1603550 ']' 00:05:40.625 20:06:17 -- common/autotest_common.sh@928 -- # kill -0 1603550 00:05:40.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (1603550) - No such process 00:05:40.625 20:06:17 -- common/autotest_common.sh@951 -- # echo 'Process with pid 1603550 is not found' 00:05:40.625 Process with pid 1603550 is not found 00:05:40.625 20:06:17 -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.625 00:05:40.625 real 0m16.231s 00:05:40.625 user 0m28.355s 00:05:40.625 sys 0m4.432s 00:05:40.625 20:06:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.625 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:40.625 ************************************ 00:05:40.625 END TEST cpu_locks 00:05:40.625 ************************************ 00:05:40.625 00:05:40.625 real 0m41.333s 00:05:40.625 user 1m19.831s 00:05:40.625 sys 0m7.549s 00:05:40.625 20:06:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.625 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:40.625 ************************************ 00:05:40.625 END TEST event 00:05:40.625 ************************************ 00:05:40.625 20:06:17 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:40.625 20:06:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:40.625 20:06:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:40.625 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:40.625 ************************************ 00:05:40.625 START TEST thread 00:05:40.625 ************************************ 00:05:40.625 20:06:17 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:40.625 * Looking for test storage... 
00:05:40.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:40.625 20:06:17 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.625 20:06:17 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:05:40.625 20:06:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:40.625 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:40.625 ************************************ 00:05:40.625 START TEST thread_poller_perf 00:05:40.625 ************************************ 00:05:40.625 20:06:17 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.625 [2024-02-14 20:06:17.964609] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:40.625 [2024-02-14 20:06:17.964700] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604098 ] 00:05:40.625 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.625 [2024-02-14 20:06:18.026690] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.885 [2024-02-14 20:06:18.096539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.885 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:41.823 ====================================== 00:05:41.823 busy:2106953498 (cyc) 00:05:41.823 total_run_count: 408000 00:05:41.823 tsc_hz: 2100000000 (cyc) 00:05:41.823 ====================================== 00:05:41.823 poller_cost: 5164 (cyc), 2459 (nsec) 00:05:41.823 00:05:41.823 real 0m1.241s 00:05:41.823 user 0m1.164s 00:05:41.823 sys 0m0.072s 00:05:41.823 20:06:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.823 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:41.823 ************************************ 00:05:41.823 END TEST thread_poller_perf 00:05:41.823 ************************************ 00:05:41.823 20:06:19 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.823 20:06:19 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:05:41.823 20:06:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:41.823 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:41.823 ************************************ 00:05:41.823 START TEST thread_poller_perf 00:05:41.823 ************************************ 00:05:41.823 20:06:19 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.823 [2024-02-14 20:06:19.238735] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
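The poller_cost line in the summary above is derived from the other three numbers: busy cycles divided by iterations, then converted to nanoseconds through the reported TSC rate. Reproducing the arithmetic with this run's values:

busy=2106953498; runs=408000; tsc_hz=2100000000
echo $(( busy / runs ))                         # 5164 cycles per poller run
echo $(( busy * 1000000000 / tsc_hz / runs ))   # 2459 nsec per poller run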
00:05:41.823 [2024-02-14 20:06:19.238813] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604343 ] 00:05:42.082 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.082 [2024-02-14 20:06:19.300012] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.082 [2024-02-14 20:06:19.367125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.082 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:43.461 ====================================== 00:05:43.461 busy:2101935830 (cyc) 00:05:43.461 total_run_count: 5509000 00:05:43.461 tsc_hz: 2100000000 (cyc) 00:05:43.461 ====================================== 00:05:43.461 poller_cost: 381 (cyc), 181 (nsec) 00:05:43.461 00:05:43.461 real 0m1.242s 00:05:43.461 user 0m1.162s 00:05:43.461 sys 0m0.074s 00:05:43.461 20:06:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.461 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.461 ************************************ 00:05:43.461 END TEST thread_poller_perf 00:05:43.461 ************************************ 00:05:43.461 20:06:20 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:43.461 00:05:43.461 real 0m2.630s 00:05:43.461 user 0m2.389s 00:05:43.461 sys 0m0.247s 00:05:43.461 20:06:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.461 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.461 ************************************ 00:05:43.461 END TEST thread 00:05:43.461 ************************************ 00:05:43.461 20:06:20 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:43.461 20:06:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:43.461 20:06:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:43.461 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.461 ************************************ 00:05:43.461 START TEST accel 00:05:43.461 ************************************ 00:05:43.461 20:06:20 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:43.461 * Looking for test storage... 00:05:43.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:43.461 20:06:20 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:43.461 20:06:20 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:43.461 20:06:20 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.461 20:06:20 -- accel/accel.sh@59 -- # spdk_tgt_pid=1604636 00:05:43.461 20:06:20 -- accel/accel.sh@60 -- # waitforlisten 1604636 00:05:43.461 20:06:20 -- common/autotest_common.sh@817 -- # '[' -z 1604636 ']' 00:05:43.461 20:06:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.461 20:06:20 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:43.461 20:06:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.461 20:06:20 -- accel/accel.sh@58 -- # build_accel_config 00:05:43.461 20:06:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
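The two poller_perf runs differ only in -l, the poller period in microseconds: 1 in the first run, 0 (untimed pollers) in the second, which is why the second run completes roughly 13x more iterations at about 381 cycles each. The invocations, with the workspace prefix dropped:

test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 second
test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # 1000 untimed pollers, 1 second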
00:05:43.461 20:06:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.461 20:06:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.461 20:06:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.461 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.461 20:06:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.461 20:06:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.461 20:06:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.461 20:06:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.461 20:06:20 -- accel/accel.sh@42 -- # jq -r . 00:05:43.461 [2024-02-14 20:06:20.645278] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:43.461 [2024-02-14 20:06:20.645326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604636 ] 00:05:43.461 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.461 [2024-02-14 20:06:20.705694] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.461 [2024-02-14 20:06:20.775387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.461 [2024-02-14 20:06:20.775520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.461 [2024-02-14 20:06:20.775542] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:44.030 20:06:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.030 20:06:21 -- common/autotest_common.sh@850 -- # return 0 00:05:44.030 20:06:21 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:44.030 20:06:21 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:44.030 20:06:21 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:44.030 20:06:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:44.030 20:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:44.030 20:06:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:44.290 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.290 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 
20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # IFS== 00:05:44.291 20:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:44.291 20:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:44.291 20:06:21 -- accel/accel.sh@67 -- # killprocess 1604636 00:05:44.291 20:06:21 -- common/autotest_common.sh@924 -- # '[' -z 1604636 ']' 00:05:44.291 20:06:21 -- common/autotest_common.sh@928 -- # kill -0 1604636 00:05:44.291 20:06:21 -- common/autotest_common.sh@929 -- # uname 00:05:44.291 20:06:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:44.291 20:06:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1604636 00:05:44.291 20:06:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:44.291 20:06:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:44.291 20:06:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1604636' 00:05:44.291 killing process with pid 1604636 00:05:44.291 20:06:21 -- common/autotest_common.sh@943 -- # kill 1604636 00:05:44.291 [2024-02-14 20:06:21.533979] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:44.291 20:06:21 -- common/autotest_common.sh@948 -- # wait 1604636 00:05:44.551 20:06:21 -- accel/accel.sh@68 -- # trap - ERR 00:05:44.551 20:06:21 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:44.551 20:06:21 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:05:44.551 20:06:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:44.551 20:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:44.551 20:06:21 -- common/autotest_common.sh@1102 -- # accel_perf -h 00:05:44.551 20:06:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:44.551 20:06:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.551 20:06:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.551 20:06:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.551 20:06:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.551 20:06:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.551 20:06:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.551 20:06:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.551 20:06:21 -- accel/accel.sh@42 -- # jq -r . 
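The long IFS== read loop above is parsing the output of a single RPC: accel_get_opc_assignments returns a JSON map of opcode to module, and every opcode in this run resolves to the software module. The underlying query, sketched with scripts/rpc.py in place of the test harness's rpc_cmd wrapper:

# Flatten the opcode->module JSON map into "opcode=module" lines.
scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'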
00:05:44.551 20:06:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.551 20:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:44.551 20:06:21 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:44.551 20:06:21 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:44.551 20:06:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:44.551 20:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:44.551 ************************************ 00:05:44.551 START TEST accel_missing_filename 00:05:44.551 ************************************ 00:05:44.551 20:06:21 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress 00:05:44.551 20:06:21 -- common/autotest_common.sh@638 -- # local es=0 00:05:44.551 20:06:21 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:44.551 20:06:21 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:44.551 20:06:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:44.551 20:06:21 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:44.551 20:06:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:44.551 20:06:21 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:44.551 20:06:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:44.551 20:06:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.551 20:06:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.551 20:06:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.551 20:06:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.551 20:06:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.551 20:06:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.551 20:06:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.551 20:06:21 -- accel/accel.sh@42 -- # jq -r . 00:05:44.551 [2024-02-14 20:06:21.952494] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:44.551 [2024-02-14 20:06:21.952572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604898 ] 00:05:44.812 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.812 [2024-02-14 20:06:22.011981] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.812 [2024-02-14 20:06:22.079626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.812 [2024-02-14 20:06:22.079688] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:44.812 [2024-02-14 20:06:22.119069] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.812 [2024-02-14 20:06:22.119109] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:44.812 [2024-02-14 20:06:22.177832] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:45.072 A filename is required. 
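accel_missing_filename is a negative test: a compress workload needs an input file via -l, so the invocation above has to fail, and 'A filename is required.' is the expected error. A minimal reproduction, binary path given relative to an SPDK build tree:

# Must exit non-zero; reaching the echo would mean the guard is broken.
build/examples/accel_perf -t 1 -w compress && echo 'unexpected success'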
00:05:45.072 20:06:22 -- common/autotest_common.sh@641 -- # es=234 00:05:45.072 20:06:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:45.072 20:06:22 -- common/autotest_common.sh@650 -- # es=106 00:05:45.072 20:06:22 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:45.072 20:06:22 -- common/autotest_common.sh@658 -- # es=1 00:05:45.072 20:06:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:45.072 00:05:45.072 real 0m0.343s 00:05:45.072 user 0m0.261s 00:05:45.072 sys 0m0.118s 00:05:45.072 20:06:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.072 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.072 ************************************ 00:05:45.072 END TEST accel_missing_filename 00:05:45.072 ************************************ 00:05:45.072 20:06:22 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:45.072 20:06:22 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:05:45.072 20:06:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:45.072 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.072 ************************************ 00:05:45.072 START TEST accel_compress_verify 00:05:45.072 ************************************ 00:05:45.072 20:06:22 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:45.072 20:06:22 -- common/autotest_common.sh@638 -- # local es=0 00:05:45.072 20:06:22 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:45.072 20:06:22 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:45.072 20:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.072 20:06:22 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:45.072 20:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.072 20:06:22 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:45.072 20:06:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:45.072 20:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.072 20:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.072 20:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.072 20:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.072 20:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.072 20:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.072 20:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.072 20:06:22 -- accel/accel.sh@42 -- # jq -r . 00:05:45.072 [2024-02-14 20:06:22.329550] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:45.072 [2024-02-14 20:06:22.329617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604922 ] 00:05:45.072 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.072 [2024-02-14 20:06:22.392747] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.072 [2024-02-14 20:06:22.462173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.072 [2024-02-14 20:06:22.462227] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:45.332 [2024-02-14 20:06:22.502618] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:45.332 [2024-02-14 20:06:22.502681] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:45.332 [2024-02-14 20:06:22.562247] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:45.332 00:05:45.332 Compression does not support the verify option, aborting. 00:05:45.332 20:06:22 -- common/autotest_common.sh@641 -- # es=161 00:05:45.332 20:06:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:45.332 20:06:22 -- common/autotest_common.sh@650 -- # es=33 00:05:45.332 20:06:22 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:45.332 20:06:22 -- common/autotest_common.sh@658 -- # es=1 00:05:45.332 20:06:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:45.332 00:05:45.332 real 0m0.353s 00:05:45.332 user 0m0.271s 00:05:45.332 sys 0m0.118s 00:05:45.332 20:06:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.332 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.332 ************************************ 00:05:45.332 END TEST accel_compress_verify 00:05:45.332 ************************************ 00:05:45.332 20:06:22 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:45.332 20:06:22 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:45.333 20:06:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:45.333 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.333 ************************************ 00:05:45.333 START TEST accel_wrong_workload 00:05:45.333 ************************************ 00:05:45.333 20:06:22 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w foobar 00:05:45.333 20:06:22 -- common/autotest_common.sh@638 -- # local es=0 00:05:45.333 20:06:22 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:45.333 20:06:22 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:45.333 20:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.333 20:06:22 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:45.333 20:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.333 20:06:22 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:45.333 20:06:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:45.333 20:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.333 20:06:22 -- accel/accel.sh@32 -- # 
accel_json_cfg=() 00:05:45.333 20:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.333 20:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.333 20:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.333 20:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.333 20:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.333 20:06:22 -- accel/accel.sh@42 -- # jq -r . 00:05:45.333 Unsupported workload type: foobar 00:05:45.333 [2024-02-14 20:06:22.715861] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:45.333 accel_perf options: 00:05:45.333 [-h help message] 00:05:45.333 [-q queue depth per core] 00:05:45.333 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:45.333 [-T number of threads per core 00:05:45.333 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:45.333 [-t time in seconds] 00:05:45.333 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:45.333 [ dif_verify, , dif_generate, dif_generate_copy 00:05:45.333 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:45.333 [-l for compress/decompress workloads, name of uncompressed input file 00:05:45.333 [-S for crc32c workload, use this seed value (default 0) 00:05:45.333 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:45.333 [-f for fill workload, use this BYTE value (default 255) 00:05:45.333 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:45.333 [-y verify result if this switch is on] 00:05:45.333 [-a tasks to allocate per core (default: same value as -q)] 00:05:45.333 Can be used to spread operations across a wider range of memory. 
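Both failures in this block exercise argument validation in spdk_app_parse_args: an unknown -w workload ('foobar') here, and a negative -x source-buffer count just below. Each aborts before any I/O is issued and prints the option summary above. Any workload named in that list is accepted, for example:

# A valid 1-second run with verification, using a listed workload.
build/examples/accel_perf -t 1 -w copy -y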
00:05:45.333 20:06:22 -- common/autotest_common.sh@641 -- # es=1 00:05:45.333 20:06:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:45.333 20:06:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:45.333 20:06:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:45.333 00:05:45.333 real 0m0.034s 00:05:45.333 user 0m0.024s 00:05:45.333 sys 0m0.009s 00:05:45.333 20:06:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.333 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.333 ************************************ 00:05:45.333 END TEST accel_wrong_workload 00:05:45.333 ************************************ 00:05:45.333 Error: writing output failed: Broken pipe 00:05:45.593 20:06:22 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:45.593 20:06:22 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:05:45.593 20:06:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:45.593 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.593 ************************************ 00:05:45.593 START TEST accel_negative_buffers 00:05:45.593 ************************************ 00:05:45.593 20:06:22 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:45.593 20:06:22 -- common/autotest_common.sh@638 -- # local es=0 00:05:45.593 20:06:22 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:45.594 20:06:22 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:45.594 20:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.594 20:06:22 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:45.594 20:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.594 20:06:22 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:45.594 20:06:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:45.594 20:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.594 20:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.594 20:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.594 20:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.594 20:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.594 20:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.594 20:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.594 20:06:22 -- accel/accel.sh@42 -- # jq -r . 00:05:45.594 -x option must be non-negative. 00:05:45.594 [2024-02-14 20:06:22.788202] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:45.594 accel_perf options: 00:05:45.594 [-h help message] 00:05:45.594 [-q queue depth per core] 00:05:45.594 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:45.594 [-T number of threads per core 00:05:45.594 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:45.594 [-t time in seconds] 00:05:45.594 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:45.594 [ dif_verify, , dif_generate, dif_generate_copy 00:05:45.594 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:45.594 [-l for compress/decompress workloads, name of uncompressed input file 00:05:45.594 [-S for crc32c workload, use this seed value (default 0) 00:05:45.594 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:45.594 [-f for fill workload, use this BYTE value (default 255) 00:05:45.594 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:45.594 [-y verify result if this switch is on] 00:05:45.594 [-a tasks to allocate per core (default: same value as -q)] 00:05:45.594 Can be used to spread operations across a wider range of memory. 00:05:45.594 20:06:22 -- common/autotest_common.sh@641 -- # es=1 00:05:45.594 20:06:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:45.594 20:06:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:45.594 20:06:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:45.594 00:05:45.594 real 0m0.033s 00:05:45.594 user 0m0.018s 00:05:45.594 sys 0m0.014s 00:05:45.594 20:06:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.594 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.594 ************************************ 00:05:45.594 END TEST accel_negative_buffers 00:05:45.594 ************************************ 00:05:45.594 Error: writing output failed: Broken pipe 00:05:45.594 20:06:22 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:45.594 20:06:22 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:45.594 20:06:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:45.594 20:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.594 ************************************ 00:05:45.594 START TEST accel_crc32c 00:05:45.594 ************************************ 00:05:45.594 20:06:22 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:45.594 20:06:22 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.594 20:06:22 -- accel/accel.sh@17 -- # local accel_module 00:05:45.594 20:06:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:45.594 20:06:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:45.594 20:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.594 20:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.594 20:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.594 20:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.594 20:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.594 20:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.594 20:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.594 20:06:22 -- accel/accel.sh@42 -- # jq -r . 00:05:45.594 [2024-02-14 20:06:22.857772] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:45.594 [2024-02-14 20:06:22.857824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605061 ] 00:05:45.594 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.594 [2024-02-14 20:06:22.920506] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.594 [2024-02-14 20:06:22.988928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.594 [2024-02-14 20:06:22.988986] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:46.975 [2024-02-14 20:06:24.033664] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:46.975 20:06:24 -- accel/accel.sh@18 -- # out=' 00:05:46.975 SPDK Configuration: 00:05:46.975 Core mask: 0x1 00:05:46.975 00:05:46.975 Accel Perf Configuration: 00:05:46.975 Workload Type: crc32c 00:05:46.975 CRC-32C seed: 32 00:05:46.975 Transfer size: 4096 bytes 00:05:46.975 Vector count 1 00:05:46.975 Module: software 00:05:46.975 Queue depth: 32 00:05:46.975 Allocate depth: 32 00:05:46.975 # threads/core: 1 00:05:46.975 Run time: 1 seconds 00:05:46.975 Verify: Yes 00:05:46.975 00:05:46.975 Running for 1 seconds... 00:05:46.975 00:05:46.975 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:46.975 ------------------------------------------------------------------------------------ 00:05:46.975 0,0 593824/s 2319 MiB/s 0 0 00:05:46.975 ==================================================================================== 00:05:46.975 Total 593824/s 2319 MiB/s 0 0' 00:05:46.975 20:06:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.975 20:06:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:46.975 20:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.975 20:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.975 20:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.975 20:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.975 20:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.975 20:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.975 20:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.975 20:06:24 -- accel/accel.sh@42 -- # jq -r . 00:05:46.975 [2024-02-14 20:06:24.201133] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
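The MiB/s column in the crc32c table above follows directly from the transfer rate and the 4096-byte transfer size declared in the configuration block:

echo $(( 593824 * 4096 / 1048576 ))   # 2319 MiB/s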
00:05:46.975 [2024-02-14 20:06:24.201186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605282 ] 00:05:46.975 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.975 [2024-02-14 20:06:24.258317] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.975 [2024-02-14 20:06:24.324612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.975 [2024-02-14 20:06:24.324676] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:46.975 20:06:24 -- accel/accel.sh@21 -- # val= 00:05:46.975 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.975 20:06:24 -- accel/accel.sh@21 -- # val= 00:05:46.975 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.975 20:06:24 -- accel/accel.sh@21 -- # val=0x1 00:05:46.975 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.975 20:06:24 -- accel/accel.sh@21 -- # val= 00:05:46.975 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.975 20:06:24 -- accel/accel.sh@21 -- # val= 00:05:46.975 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.975 20:06:24 -- accel/accel.sh@21 -- # val=crc32c 00:05:46.975 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.975 20:06:24 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:46.975 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val=32 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val= 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val=software 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@23 -- # accel_module=software 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val=32 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 
20:06:24 -- accel/accel.sh@21 -- # val=32 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val=1 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val=Yes 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val= 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.976 20:06:24 -- accel/accel.sh@21 -- # val= 00:05:46.976 20:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:46.976 20:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:48.357 [2024-02-14 20:06:25.368935] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:48.357 20:06:25 -- accel/accel.sh@21 -- # val= 00:05:48.357 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:48.357 20:06:25 -- accel/accel.sh@21 -- # val= 00:05:48.357 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:48.357 20:06:25 -- accel/accel.sh@21 -- # val= 00:05:48.357 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:48.357 20:06:25 -- accel/accel.sh@21 -- # val= 00:05:48.357 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:48.357 20:06:25 -- accel/accel.sh@21 -- # val= 00:05:48.357 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:48.357 20:06:25 -- accel/accel.sh@21 -- # val= 00:05:48.357 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:48.357 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:48.357 20:06:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:48.357 20:06:25 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:48.357 20:06:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.357 00:05:48.357 real 0m2.694s 00:05:48.357 user 0m2.470s 00:05:48.357 sys 0m0.233s 00:05:48.357 20:06:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.357 20:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.357 ************************************ 
00:05:48.357 END TEST accel_crc32c 00:05:48.357 ************************************ 00:05:48.357 20:06:25 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:48.357 20:06:25 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:48.357 20:06:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:48.357 20:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.357 ************************************ 00:05:48.357 START TEST accel_crc32c_C2 00:05:48.357 ************************************ 00:05:48.357 20:06:25 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:48.357 20:06:25 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.357 20:06:25 -- accel/accel.sh@17 -- # local accel_module 00:05:48.357 20:06:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:48.357 20:06:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:48.357 20:06:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.357 20:06:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.357 20:06:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.357 20:06:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.357 20:06:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.357 20:06:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.357 20:06:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.357 20:06:25 -- accel/accel.sh@42 -- # jq -r . 00:05:48.357 [2024-02-14 20:06:25.585469] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:48.357 [2024-02-14 20:06:25.585528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605538 ] 00:05:48.357 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.357 [2024-02-14 20:06:25.645082] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.357 [2024-02-14 20:06:25.712398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.357 [2024-02-14 20:06:25.712453] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:49.737 [2024-02-14 20:06:26.756918] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:49.737 20:06:26 -- accel/accel.sh@18 -- # out=' 00:05:49.737 SPDK Configuration: 00:05:49.737 Core mask: 0x1 00:05:49.737 00:05:49.737 Accel Perf Configuration: 00:05:49.737 Workload Type: crc32c 00:05:49.737 CRC-32C seed: 0 00:05:49.737 Transfer size: 4096 bytes 00:05:49.737 Vector count 2 00:05:49.738 Module: software 00:05:49.738 Queue depth: 32 00:05:49.738 Allocate depth: 32 00:05:49.738 # threads/core: 1 00:05:49.738 Run time: 1 seconds 00:05:49.738 Verify: Yes 00:05:49.738 00:05:49.738 Running for 1 seconds... 
00:05:49.738 00:05:49.738 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:49.738 ------------------------------------------------------------------------------------ 00:05:49.738 0,0 464352/s 3627 MiB/s 0 0 00:05:49.738 ==================================================================================== 00:05:49.738 Total 464352/s 1813 MiB/s 0 0' 00:05:49.738 20:06:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:49.738 20:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:49.738 20:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.738 20:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.738 20:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.738 20:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.738 20:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.738 20:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.738 20:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.738 20:06:26 -- accel/accel.sh@42 -- # jq -r . 00:05:49.738 [2024-02-14 20:06:26.919584] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:49.738 [2024-02-14 20:06:26.919632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605754 ] 00:05:49.738 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.738 [2024-02-14 20:06:26.971754] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.738 [2024-02-14 20:06:27.045818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.738 [2024-02-14 20:06:27.045866] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val= 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val= 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val=0x1 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val= 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val= 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val=crc32c 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 
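The per-core row and the Total row above show the same 464352 transfers/s with different bandwidths. The arithmetic suggests the per-core figure counts both 4096-byte vectors submitted per operation (-C 2) while the Total row counts 4096 bytes per operation; that is a reading of the numbers, not a claim about accel_perf internals:

echo $(( 464352 * 8192 / 1048576 ))   # 3627 MiB/s, two vectors per op
echo $(( 464352 * 4096 / 1048576 ))   # 1813 MiB/s, one vector per op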
20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val=0 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val= 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val=software 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@23 -- # accel_module=software 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val=32 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val=32 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val=1 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val=Yes 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val= 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.738 20:06:27 -- accel/accel.sh@21 -- # val= 00:05:49.738 20:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:49.738 20:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:50.677 [2024-02-14 20:06:28.089632] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:50.936 20:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.936 20:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.936 20:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.936 20:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.936 20:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.936 20:06:28 -- accel/accel.sh@22 -- # case "$var" in 
00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.936 20:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.936 20:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.936 20:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.936 20:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.936 20:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.936 20:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.936 20:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.936 20:06:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:50.936 20:06:28 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:50.936 20:06:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.936 00:05:50.936 real 0m2.687s 00:05:50.936 user 0m2.480s 00:05:50.936 sys 0m0.216s 00:05:50.936 20:06:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.936 20:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:50.936 ************************************ 00:05:50.936 END TEST accel_crc32c_C2 00:05:50.936 ************************************ 00:05:50.936 20:06:28 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:50.936 20:06:28 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:50.936 20:06:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:50.936 20:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:50.936 ************************************ 00:05:50.936 START TEST accel_copy 00:05:50.936 ************************************ 00:05:50.936 20:06:28 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy -y 00:05:50.936 20:06:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.936 20:06:28 -- accel/accel.sh@17 -- # local accel_module 00:05:50.936 20:06:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:50.936 20:06:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:50.936 20:06:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.936 20:06:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.936 20:06:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.936 20:06:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.936 20:06:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.936 20:06:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.936 20:06:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.936 20:06:28 -- accel/accel.sh@42 -- # jq -r . 00:05:50.936 [2024-02-14 20:06:28.305910] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:50.936 [2024-02-14 20:06:28.305969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606013 ]
00:05:50.936 EAL: No free 2048 kB hugepages reported on node 1
00:05:51.211 [2024-02-14 20:06:28.364732] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:51.211 [2024-02-14 20:06:28.443564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.211 [2024-02-14 20:06:28.443613] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:05:52.162 [2024-02-14 20:06:29.489176] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:05:52.422 20:06:29 -- accel/accel.sh@18 -- # out='
00:05:52.422 SPDK Configuration:
00:05:52.422 Core mask: 0x1
00:05:52.422
00:05:52.422 Accel Perf Configuration:
00:05:52.422 Workload Type: copy
00:05:52.422 Transfer size: 4096 bytes
00:05:52.422 Vector count 1
00:05:52.422 Module: software
00:05:52.422 Queue depth: 32
00:05:52.422 Allocate depth: 32
00:05:52.422 # threads/core: 1
00:05:52.422 Run time: 1 seconds
00:05:52.422 Verify: Yes
00:05:52.422
00:05:52.422 Running for 1 seconds...
00:05:52.422
00:05:52.422 Core,Thread Transfers Bandwidth Failed Miscompares
00:05:52.422 ------------------------------------------------------------------------------------
00:05:52.422 0,0 438784/s 1714 MiB/s 0 0
00:05:52.422 ====================================================================================
00:05:52.422 Total 438784/s 1714 MiB/s 0 0'
00:05:52.422 20:06:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=:
00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val
00:05:52.422 20:06:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:52.422 20:06:29 -- accel/accel.sh@12 -- # build_accel_config
00:05:52.422 20:06:29 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:52.422 20:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:52.422 20:06:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:52.422 20:06:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:52.422 20:06:29 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:52.422 20:06:29 -- accel/accel.sh@41 -- # local IFS=,
00:05:52.422 20:06:29 -- accel/accel.sh@42 -- # jq -r .
00:05:52.422 [2024-02-14 20:06:29.654862] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:05:52.422 [2024-02-14 20:06:29.654919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606261 ] 00:05:52.422 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.422 [2024-02-14 20:06:29.714589] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.422 [2024-02-14 20:06:29.781491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.422 [2024-02-14 20:06:29.781546] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val=0x1 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val=copy 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val=software 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@23 -- # accel_module=software 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val=32 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val=32 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- 
accel/accel.sh@21 -- # val=1 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val=Yes 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.422 20:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.422 20:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.422 20:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:53.802 [2024-02-14 20:06:30.826320] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:53.802 20:06:30 -- accel/accel.sh@21 -- # val= 00:05:53.802 20:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:53.802 20:06:30 -- accel/accel.sh@21 -- # val= 00:05:53.802 20:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:53.802 20:06:30 -- accel/accel.sh@21 -- # val= 00:05:53.802 20:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:53.802 20:06:30 -- accel/accel.sh@21 -- # val= 00:05:53.802 20:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:53.802 20:06:30 -- accel/accel.sh@21 -- # val= 00:05:53.802 20:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:53.802 20:06:30 -- accel/accel.sh@21 -- # val= 00:05:53.802 20:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:53.802 20:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:53.802 20:06:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:53.802 20:06:30 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:53.802 20:06:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.802 00:05:53.802 real 0m2.702s 00:05:53.802 user 0m2.480s 00:05:53.802 sys 0m0.228s 00:05:53.802 20:06:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.802 20:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.802 ************************************ 00:05:53.802 END TEST accel_copy 00:05:53.802 ************************************ 00:05:53.802 20:06:31 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.802 20:06:31 -- common/autotest_common.sh@1075 -- # '[' 
13 -le 1 ']'
00:05:53.802 20:06:31 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:05:53.802 20:06:31 -- common/autotest_common.sh@10 -- # set +x
00:05:53.802 ************************************
00:05:53.802 START TEST accel_fill
00:05:53.802 ************************************
00:05:53.802 20:06:31 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:53.802 20:06:31 -- accel/accel.sh@16 -- # local accel_opc
00:05:53.802 20:06:31 -- accel/accel.sh@17 -- # local accel_module
00:05:53.802 20:06:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:53.802 20:06:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:53.802 20:06:31 -- accel/accel.sh@12 -- # build_accel_config
00:05:53.802 20:06:31 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:53.802 20:06:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:53.802 20:06:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:53.802 20:06:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:53.802 20:06:31 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:53.802 20:06:31 -- accel/accel.sh@41 -- # local IFS=,
00:05:53.802 20:06:31 -- accel/accel.sh@42 -- # jq -r .
00:05:53.802 [2024-02-14 20:06:31.043730] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:05:53.802 [2024-02-14 20:06:31.043810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606514 ]
00:05:53.802 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.802 [2024-02-14 20:06:31.104859] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.802 [2024-02-14 20:06:31.173683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.802 [2024-02-14 20:06:31.173737] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:05:55.184 [2024-02-14 20:06:32.218785] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:05:55.184 20:06:32 -- accel/accel.sh@18 -- # out='
00:05:55.184 SPDK Configuration:
00:05:55.184 Core mask: 0x1
00:05:55.184
00:05:55.184 Accel Perf Configuration:
00:05:55.184 Workload Type: fill
00:05:55.184 Fill pattern: 0x80
00:05:55.184 Transfer size: 4096 bytes
00:05:55.184 Vector count 1
00:05:55.184 Module: software
00:05:55.184 Queue depth: 64
00:05:55.184 Allocate depth: 64
00:05:55.184 # threads/core: 1
00:05:55.184 Run time: 1 seconds
00:05:55.184 Verify: Yes
00:05:55.184
00:05:55.184 Running for 1 seconds...
00:05:55.184
00:05:55.184 Core,Thread Transfers Bandwidth Failed Miscompares
00:05:55.184 ------------------------------------------------------------------------------------
00:05:55.184 0,0 678592/s 2650 MiB/s 0 0
00:05:55.184 ====================================================================================
00:05:55.184 Total 678592/s 2650 MiB/s 0 0'
00:05:55.184 20:06:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=:
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val
00:05:55.184 20:06:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:55.184 20:06:32 -- accel/accel.sh@12 -- # build_accel_config
00:05:55.184 20:06:32 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:55.184 20:06:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:55.184 20:06:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:55.184 20:06:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:55.184 20:06:32 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:55.184 20:06:32 -- accel/accel.sh@41 -- # local IFS=,
00:05:55.184 20:06:32 -- accel/accel.sh@42 -- # jq -r .
00:05:55.184 [2024-02-14 20:06:32.383736] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:05:55.184 [2024-02-14 20:06:32.383787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606739 ]
00:05:55.184 EAL: No free 2048 kB hugepages reported on node 1
00:05:55.184 [2024-02-14 20:06:32.435617] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:55.184 [2024-02-14 20:06:32.504113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.184 [2024-02-14 20:06:32.504167] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=
00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=:
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val
00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=
00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=:
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val
00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=0x1
00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=:
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val
00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=
00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=:
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val
00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=
00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=:
00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val
00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=fill
00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in
00:05:55.184 20:06:32 -- accel/accel.sh@24 -- # accel_opc=fill
00:05:55.184 20:06:32 -- accel/accel.sh@20 --
IFS=: 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=0x80 00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val= 00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=software 00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.184 20:06:32 -- accel/accel.sh@23 -- # accel_module=software 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=64 00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=64 00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val=1 00:05:55.184 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.184 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.184 20:06:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:55.185 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.185 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.185 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.185 20:06:32 -- accel/accel.sh@21 -- # val=Yes 00:05:55.185 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.185 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.185 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.185 20:06:32 -- accel/accel.sh@21 -- # val= 00:05:55.185 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.185 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.185 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.185 20:06:32 -- accel/accel.sh@21 -- # val= 00:05:55.185 20:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.185 20:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.185 20:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:56.564 [2024-02-14 20:06:33.549010] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:56.564 20:06:33 -- accel/accel.sh@21 -- # val= 00:05:56.564 20:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.564 20:06:33 -- accel/accel.sh@21 -- # val= 00:05:56.564 20:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.564 20:06:33 -- accel/accel.sh@21 -- # val= 00:05:56.564 20:06:33 -- accel/accel.sh@22 
-- # case "$var" in 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.564 20:06:33 -- accel/accel.sh@21 -- # val= 00:05:56.564 20:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.564 20:06:33 -- accel/accel.sh@21 -- # val= 00:05:56.564 20:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.564 20:06:33 -- accel/accel.sh@21 -- # val= 00:05:56.564 20:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.564 20:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.564 20:06:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.564 20:06:33 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:56.564 20:06:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.564 00:05:56.564 real 0m2.690s 00:05:56.564 user 0m2.477s 00:05:56.564 sys 0m0.222s 00:05:56.564 20:06:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.564 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:56.564 ************************************ 00:05:56.564 END TEST accel_fill 00:05:56.564 ************************************ 00:05:56.564 20:06:33 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:56.564 20:06:33 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:56.564 20:06:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:56.564 20:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:56.564 ************************************ 00:05:56.564 START TEST accel_copy_crc32c 00:05:56.564 ************************************ 00:05:56.564 20:06:33 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y 00:05:56.564 20:06:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.564 20:06:33 -- accel/accel.sh@17 -- # local accel_module 00:05:56.564 20:06:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:56.564 20:06:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:56.564 20:06:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.564 20:06:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.564 20:06:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.564 20:06:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.564 20:06:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.564 20:06:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.564 20:06:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.564 20:06:33 -- accel/accel.sh@42 -- # jq -r . 00:05:56.564 [2024-02-14 20:06:33.766248] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:56.564 [2024-02-14 20:06:33.766308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606991 ]
00:05:56.564 EAL: No free 2048 kB hugepages reported on node 1
00:05:56.564 [2024-02-14 20:06:33.825335] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.564 [2024-02-14 20:06:33.893941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.564 [2024-02-14 20:06:33.893997] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:05:57.944 [2024-02-14 20:06:34.938717] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:05:57.944 20:06:35 -- accel/accel.sh@18 -- # out='
00:05:57.944 SPDK Configuration:
00:05:57.944 Core mask: 0x1
00:05:57.944
00:05:57.944 Accel Perf Configuration:
00:05:57.944 Workload Type: copy_crc32c
00:05:57.944 CRC-32C seed: 0
00:05:57.944 Vector size: 4096 bytes
00:05:57.944 Transfer size: 4096 bytes
00:05:57.944 Vector count 1
00:05:57.944 Module: software
00:05:57.944 Queue depth: 32
00:05:57.944 Allocate depth: 32
00:05:57.944 # threads/core: 1
00:05:57.944 Run time: 1 seconds
00:05:57.944 Verify: Yes
00:05:57.944
00:05:57.944 Running for 1 seconds...
00:05:57.944
00:05:57.944 Core,Thread Transfers Bandwidth Failed Miscompares
00:05:57.944 ------------------------------------------------------------------------------------
00:05:57.944 0,0 335008/s 1308 MiB/s 0 0
00:05:57.944 ====================================================================================
00:05:57.944 Total 335008/s 1308 MiB/s 0 0'
00:05:57.944 20:06:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # IFS=:
00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # read -r var val
00:05:57.944 20:06:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:57.944 20:06:35 -- accel/accel.sh@12 -- # build_accel_config
00:05:57.944 20:06:35 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:57.944 20:06:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:57.944 20:06:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:57.944 20:06:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:57.944 20:06:35 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:57.944 20:06:35 -- accel/accel.sh@41 -- # local IFS=,
00:05:57.944 20:06:35 -- accel/accel.sh@42 -- # jq -r .
00:05:57.944 [2024-02-14 20:06:35.102208] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:05:57.944 [2024-02-14 20:06:35.102258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607209 ] 00:05:57.944 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.944 [2024-02-14 20:06:35.153999] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.944 [2024-02-14 20:06:35.223273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.944 [2024-02-14 20:06:35.223328] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:57.944 20:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.944 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.944 20:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.944 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.944 20:06:35 -- accel/accel.sh@21 -- # val=0x1 00:05:57.944 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.944 20:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.944 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.944 20:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.944 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.944 20:06:35 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:57.944 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.944 20:06:35 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.944 20:06:35 -- accel/accel.sh@21 -- # val=0 00:05:57.944 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.944 20:06:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.944 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.944 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val=software 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@23 -- # accel_module=software 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 
00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val=32 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val=32 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val=1 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val=Yes 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.945 20:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.945 20:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.945 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:58.883 [2024-02-14 20:06:36.268021] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:59.144 20:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.144 20:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.144 20:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.144 20:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.144 20:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.144 20:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.144 20:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.144 20:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.144 20:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.144 20:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.144 20:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.144 20:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.144 20:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.144 20:06:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:59.144 20:06:36 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:59.144 20:06:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.144 00:05:59.144 real 0m2.684s 
00:05:59.144 user 0m2.480s
00:05:59.144 sys 0m0.214s 20:06:36 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:59.144 20:06:36 -- common/autotest_common.sh@10 -- # set +x
00:05:59.144 ************************************
00:05:59.144 END TEST accel_copy_crc32c
00:05:59.144 ************************************
00:05:59.144 20:06:36 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:59.144 20:06:36 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']'
00:05:59.144 20:06:36 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:05:59.144 20:06:36 -- common/autotest_common.sh@10 -- # set +x
00:05:59.144 ************************************
00:05:59.144 START TEST accel_copy_crc32c_C2
00:05:59.144 ************************************
00:05:59.144 20:06:36 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:05:59.144 20:06:36 -- accel/accel.sh@16 -- # local accel_opc
00:05:59.144 20:06:36 -- accel/accel.sh@17 -- # local accel_module
00:05:59.144 20:06:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:59.144 20:06:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:59.144 20:06:36 -- accel/accel.sh@12 -- # build_accel_config
00:05:59.144 20:06:36 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:59.144 20:06:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:59.144 20:06:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:59.144 20:06:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:59.144 20:06:36 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:59.144 20:06:36 -- accel/accel.sh@41 -- # local IFS=,
00:05:59.144 20:06:36 -- accel/accel.sh@42 -- # jq -r .
00:05:59.144 [2024-02-14 20:06:36.484627] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:05:59.144 [2024-02-14 20:06:36.484693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607455 ]
00:05:59.144 EAL: No free 2048 kB hugepages reported on node 1
00:05:59.144 [2024-02-14 20:06:36.544060] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.404 [2024-02-14 20:06:36.614820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.404 [2024-02-14 20:06:36.614873] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:06:00.343 [2024-02-14 20:06:37.659677] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:06:00.604 20:06:37 -- accel/accel.sh@18 -- # out='
00:06:00.604 SPDK Configuration:
00:06:00.604 Core mask: 0x1
00:06:00.604
00:06:00.604 Accel Perf Configuration:
00:06:00.604 Workload Type: copy_crc32c
00:06:00.604 CRC-32C seed: 0
00:06:00.604 Vector size: 4096 bytes
00:06:00.604 Transfer size: 8192 bytes
00:06:00.604 Vector count 2
00:06:00.604 Module: software
00:06:00.604 Queue depth: 32
00:06:00.604 Allocate depth: 32
00:06:00.604 # threads/core: 1
00:06:00.604 Run time: 1 seconds
00:06:00.605 Verify: Yes
00:06:00.605
00:06:00.605 Running for 1 seconds...
00:06:00.605
00:06:00.605 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:00.605 ------------------------------------------------------------------------------------
00:06:00.605 0,0 244000/s 1906 MiB/s 0 0
00:06:00.605 ====================================================================================
00:06:00.605 Total 244000/s 953 MiB/s 0 0'
00:06:00.605 20:06:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=:
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val
00:06:00.605 20:06:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:00.605 20:06:37 -- accel/accel.sh@12 -- # build_accel_config
00:06:00.605 20:06:37 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:00.605 20:06:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:00.605 20:06:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:00.605 20:06:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:00.605 20:06:37 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:00.605 20:06:37 -- accel/accel.sh@41 -- # local IFS=,
00:06:00.605 20:06:37 -- accel/accel.sh@42 -- # jq -r .
00:06:00.605 [2024-02-14 20:06:37.823316] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:06:00.605 [2024-02-14 20:06:37.823365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607684 ]
00:06:00.605 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.605 [2024-02-14 20:06:37.874961] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.605 [2024-02-14 20:06:37.943423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.605 [2024-02-14 20:06:37.943475] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=
00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=:
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val
00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=
00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=:
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val
00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=0x1
00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=:
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val
00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=
00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=:
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val
00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=
00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=:
00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val
00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=copy_crc32c
00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in
00:06:00.605 20:06:37 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:06:00.605 20:06:37 -- accel/accel.sh@20 --
IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=0 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val= 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=software 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=32 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=32 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=1 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val=Yes 00:06:00.605 20:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:37 -- accel/accel.sh@21 -- # val= 00:06:00.605 20:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:00.605 20:06:38 -- accel/accel.sh@21 -- # val= 00:06:00.605 20:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.605 20:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:00.605 20:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.987 [2024-02-14 20:06:38.988421] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:01.987 20:06:39 -- accel/accel.sh@21 -- # val= 00:06:01.987 20:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:01.987 20:06:39 -- accel/accel.sh@21 -- # val= 00:06:01.987 20:06:39 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:01.987 20:06:39 -- accel/accel.sh@21 -- # val= 00:06:01.987 20:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:01.987 20:06:39 -- accel/accel.sh@21 -- # val= 00:06:01.987 20:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:01.987 20:06:39 -- accel/accel.sh@21 -- # val= 00:06:01.987 20:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:01.987 20:06:39 -- accel/accel.sh@21 -- # val= 00:06:01.987 20:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:01.987 20:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:01.987 20:06:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.987 20:06:39 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:01.987 20:06:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.987 00:06:01.987 real 0m2.686s 00:06:01.987 user 0m2.475s 00:06:01.987 sys 0m0.222s 00:06:01.987 20:06:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.987 20:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:01.987 ************************************ 00:06:01.987 END TEST accel_copy_crc32c_C2 00:06:01.987 ************************************ 00:06:01.987 20:06:39 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:01.987 20:06:39 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:06:01.987 20:06:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:01.987 20:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:01.987 ************************************ 00:06:01.987 START TEST accel_dualcast 00:06:01.987 ************************************ 00:06:01.987 20:06:39 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dualcast -y 00:06:01.987 20:06:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.987 20:06:39 -- accel/accel.sh@17 -- # local accel_module 00:06:01.987 20:06:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:01.987 20:06:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:01.987 20:06:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.987 20:06:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.987 20:06:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.987 20:06:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.987 20:06:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.987 20:06:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.987 20:06:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.987 20:06:39 -- accel/accel.sh@42 -- # jq -r . 00:06:01.987 [2024-02-14 20:06:39.204002] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:01.987 [2024-02-14 20:06:39.204062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607925 ]
00:06:01.987 EAL: No free 2048 kB hugepages reported on node 1
00:06:01.987 [2024-02-14 20:06:39.262398] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.987 [2024-02-14 20:06:39.330763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.987 [2024-02-14 20:06:39.330816] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:06:03.367 [2024-02-14 20:06:40.374789] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:06:03.367 20:06:40 -- accel/accel.sh@18 -- # out='
00:06:03.367 SPDK Configuration:
00:06:03.367 Core mask: 0x1
00:06:03.367
00:06:03.367 Accel Perf Configuration:
00:06:03.367 Workload Type: dualcast
00:06:03.367 Transfer size: 4096 bytes
00:06:03.367 Vector count 1
00:06:03.367 Module: software
00:06:03.367 Queue depth: 32
00:06:03.367 Allocate depth: 32
00:06:03.367 # threads/core: 1
00:06:03.367 Run time: 1 seconds
00:06:03.367 Verify: Yes
00:06:03.367
00:06:03.367 Running for 1 seconds...
00:06:03.367
00:06:03.367 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:03.367 ------------------------------------------------------------------------------------
00:06:03.367 0,0 502368/s 1962 MiB/s 0 0
00:06:03.367 ====================================================================================
00:06:03.367 Total 502368/s 1962 MiB/s 0 0'
00:06:03.367 20:06:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=:
00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val
00:06:03.367 20:06:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:03.367 20:06:40 -- accel/accel.sh@12 -- # build_accel_config
00:06:03.367 20:06:40 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:03.367 20:06:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:03.367 20:06:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:03.367 20:06:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:03.367 20:06:40 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:03.367 20:06:40 -- accel/accel.sh@41 -- # local IFS=,
00:06:03.367 20:06:40 -- accel/accel.sh@42 -- # jq -r .
00:06:03.367 [2024-02-14 20:06:40.542940] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:06:03.367 [2024-02-14 20:06:40.542997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608148 ] 00:06:03.367 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.367 [2024-02-14 20:06:40.601297] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.367 [2024-02-14 20:06:40.668505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.367 [2024-02-14 20:06:40.668560] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val= 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val= 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val=0x1 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val= 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val= 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val=dualcast 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val= 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val=software 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val=32 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val=32 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 
20:06:40 -- accel/accel.sh@21 -- # val=1 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val=Yes 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val= 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.367 20:06:40 -- accel/accel.sh@21 -- # val= 00:06:03.367 20:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.367 20:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:04.306 [2024-02-14 20:06:41.712643] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:04.566 20:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.566 20:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.566 20:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.566 20:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.566 20:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.566 20:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.566 20:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.566 20:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.566 20:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.566 20:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.566 20:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.566 20:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.566 20:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.566 20:06:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.566 20:06:41 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:04.566 20:06:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.566 00:06:04.566 real 0m2.691s 00:06:04.566 user 0m2.476s 00:06:04.566 sys 0m0.223s 00:06:04.566 20:06:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.566 20:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:04.566 ************************************ 00:06:04.566 END TEST accel_dualcast 00:06:04.566 ************************************ 00:06:04.566 20:06:41 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:04.566 20:06:41 -- common/autotest_common.sh@1075 
-- # '[' 7 -le 1 ']' 00:06:04.566 20:06:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:04.566 20:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:04.566 ************************************ 00:06:04.566 START TEST accel_compare 00:06:04.566 ************************************ 00:06:04.566 20:06:41 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compare -y 00:06:04.566 20:06:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.566 20:06:41 -- accel/accel.sh@17 -- # local accel_module 00:06:04.566 20:06:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:04.566 20:06:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:04.566 20:06:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.566 20:06:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.566 20:06:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.566 20:06:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.566 20:06:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.566 20:06:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.566 20:06:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.566 20:06:41 -- accel/accel.sh@42 -- # jq -r . 00:06:04.566 [2024-02-14 20:06:41.929713] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:04.566 [2024-02-14 20:06:41.929784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608407 ] 00:06:04.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.826 [2024-02-14 20:06:41.992093] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.826 [2024-02-14 20:06:42.064744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.826 [2024-02-14 20:06:42.064796] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:05.765 [2024-02-14 20:06:43.109542] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:06.027 20:06:43 -- accel/accel.sh@18 -- # out=' 00:06:06.027 SPDK Configuration: 00:06:06.027 Core mask: 0x1 00:06:06.027 00:06:06.027 Accel Perf Configuration: 00:06:06.027 Workload Type: compare 00:06:06.027 Transfer size: 4096 bytes 00:06:06.027 Vector count 1 00:06:06.027 Module: software 00:06:06.027 Queue depth: 32 00:06:06.027 Allocate depth: 32 00:06:06.027 # threads/core: 1 00:06:06.027 Run time: 1 seconds 00:06:06.027 Verify: Yes 00:06:06.027 00:06:06.027 Running for 1 seconds... 
00:06:06.027 00:06:06.027 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.027 ------------------------------------------------------------------------------------ 00:06:06.027 0,0 626272/s 2446 MiB/s 0 0 00:06:06.027 ==================================================================================== 00:06:06.027 Total 626272/s 2446 MiB/s 0 0' 00:06:06.027 20:06:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.027 20:06:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:06.027 20:06:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.027 20:06:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.027 20:06:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.027 20:06:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.027 20:06:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.027 20:06:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.027 20:06:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.027 20:06:43 -- accel/accel.sh@42 -- # jq -r . 00:06:06.027 [2024-02-14 20:06:43.271458] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:06.027 [2024-02-14 20:06:43.271508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608634 ] 00:06:06.027 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.027 [2024-02-14 20:06:43.327579] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.027 [2024-02-14 20:06:43.396239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.027 [2024-02-14 20:06:43.396293] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:06.027 20:06:43 -- accel/accel.sh@21 -- # val= 00:06:06.027 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.027 20:06:43 -- accel/accel.sh@21 -- # val= 00:06:06.027 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.027 20:06:43 -- accel/accel.sh@21 -- # val=0x1 00:06:06.027 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.027 20:06:43 -- accel/accel.sh@21 -- # val= 00:06:06.027 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.027 20:06:43 -- accel/accel.sh@21 -- # val= 00:06:06.027 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.027 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.027 20:06:43 -- accel/accel.sh@21 -- # val=compare 00:06:06.027 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.028 20:06:43 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:06.028 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.028 20:06:43 
-- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val= 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val=software 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val=32 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val=32 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val=1 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val=Yes 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val= 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.293 20:06:43 -- accel/accel.sh@21 -- # val= 00:06:06.293 20:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.293 20:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:07.232 [2024-02-14 20:06:44.441313] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:07.232 20:06:44 -- accel/accel.sh@21 -- # val= 00:06:07.232 20:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.232 20:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:07.232 20:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:07.232 20:06:44 -- accel/accel.sh@21 -- # val= 00:06:07.232 20:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.232 20:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:07.232 20:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:07.232 20:06:44 -- accel/accel.sh@21 -- # val= 00:06:07.232 20:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.232 20:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:07.232 20:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:07.232 20:06:44 -- accel/accel.sh@21 -- # val= 00:06:07.232 20:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.232 
20:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:07.232 20:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:07.233 20:06:44 -- accel/accel.sh@21 -- # val= 00:06:07.233 20:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.233 20:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:07.233 20:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:07.233 20:06:44 -- accel/accel.sh@21 -- # val= 00:06:07.233 20:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.233 20:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:07.233 20:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:07.233 20:06:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:07.233 20:06:44 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:07.233 20:06:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.233 00:06:07.233 real 0m2.693s 00:06:07.233 user 0m2.467s 00:06:07.233 sys 0m0.233s 00:06:07.233 20:06:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.233 20:06:44 -- common/autotest_common.sh@10 -- # set +x 00:06:07.233 ************************************ 00:06:07.233 END TEST accel_compare 00:06:07.233 ************************************ 00:06:07.233 20:06:44 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:07.233 20:06:44 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:06:07.233 20:06:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:07.233 20:06:44 -- common/autotest_common.sh@10 -- # set +x 00:06:07.233 ************************************ 00:06:07.233 START TEST accel_xor 00:06:07.233 ************************************ 00:06:07.233 20:06:44 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y 00:06:07.233 20:06:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.233 20:06:44 -- accel/accel.sh@17 -- # local accel_module 00:06:07.233 20:06:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:07.233 20:06:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:07.233 20:06:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.233 20:06:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.233 20:06:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.233 20:06:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.233 20:06:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.233 20:06:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.233 20:06:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.233 20:06:44 -- accel/accel.sh@42 -- # jq -r . 00:06:07.492 [2024-02-14 20:06:44.656621] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:07.492 [2024-02-14 20:06:44.656692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608882 ] 00:06:07.492 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.492 [2024-02-14 20:06:44.719322] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.492 [2024-02-14 20:06:44.788568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.492 [2024-02-14 20:06:44.788625] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:08.431 [2024-02-14 20:06:45.833375] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:08.691 20:06:45 -- accel/accel.sh@18 -- # out=' 00:06:08.691 SPDK Configuration: 00:06:08.691 Core mask: 0x1 00:06:08.691 00:06:08.691 Accel Perf Configuration: 00:06:08.691 Workload Type: xor 00:06:08.691 Source buffers: 2 00:06:08.691 Transfer size: 4096 bytes 00:06:08.691 Vector count 1 00:06:08.691 Module: software 00:06:08.691 Queue depth: 32 00:06:08.691 Allocate depth: 32 00:06:08.691 # threads/core: 1 00:06:08.691 Run time: 1 seconds 00:06:08.691 Verify: Yes 00:06:08.691 00:06:08.691 Running for 1 seconds... 00:06:08.691 00:06:08.691 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.691 ------------------------------------------------------------------------------------ 00:06:08.691 0,0 500384/s 1954 MiB/s 0 0 00:06:08.691 ==================================================================================== 00:06:08.691 Total 500384/s 1954 MiB/s 0 0' 00:06:08.691 20:06:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:08.691 20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:08.691 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:08.691 20:06:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:08.691 20:06:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.691 20:06:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.691 20:06:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.691 20:06:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.691 20:06:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.691 20:06:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.691 20:06:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.691 20:06:45 -- accel/accel.sh@42 -- # jq -r . 00:06:08.691 [2024-02-14 20:06:46.001303] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:08.691 [2024-02-14 20:06:46.001356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609106 ] 00:06:08.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.691 [2024-02-14 20:06:46.059341] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.950 [2024-02-14 20:06:46.128282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.950 [2024-02-14 20:06:46.128332] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:08.950 20:06:46 -- accel/accel.sh@21 -- # val= 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val= 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val=0x1 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val= 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val= 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val=xor 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val=2 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val= 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val=software 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val=32 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- 
accel/accel.sh@21 -- # val=32 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val=1 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val=Yes 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val= 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:08.951 20:06:46 -- accel/accel.sh@21 -- # val= 00:06:08.951 20:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:08.951 20:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.933 [2024-02-14 20:06:47.173514] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:09.933 20:06:47 -- accel/accel.sh@21 -- # val= 00:06:09.933 20:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.933 20:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:09.933 20:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:09.933 20:06:47 -- accel/accel.sh@21 -- # val= 00:06:09.933 20:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.933 20:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:09.933 20:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:09.933 20:06:47 -- accel/accel.sh@21 -- # val= 00:06:09.933 20:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.933 20:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:09.933 20:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:09.933 20:06:47 -- accel/accel.sh@21 -- # val= 00:06:09.933 20:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.933 20:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:09.933 20:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:09.933 20:06:47 -- accel/accel.sh@21 -- # val= 00:06:09.933 20:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.933 20:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:09.934 20:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:09.934 20:06:47 -- accel/accel.sh@21 -- # val= 00:06:09.934 20:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.934 20:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:09.934 20:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:09.934 20:06:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.934 20:06:47 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:09.934 20:06:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.934 00:06:09.934 real 0m2.701s 00:06:09.934 user 0m2.483s 00:06:09.934 sys 0m0.225s 00:06:09.934 20:06:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.934 20:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:09.934 ************************************ 00:06:09.934 END TEST 
accel_xor 00:06:09.934 ************************************ 00:06:10.193 20:06:47 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:10.193 20:06:47 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:06:10.193 20:06:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:10.193 20:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:10.193 ************************************ 00:06:10.194 START TEST accel_xor 00:06:10.194 ************************************ 00:06:10.194 20:06:47 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y -x 3 00:06:10.194 20:06:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.194 20:06:47 -- accel/accel.sh@17 -- # local accel_module 00:06:10.194 20:06:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:10.194 20:06:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:10.194 20:06:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.194 20:06:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.194 20:06:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.194 20:06:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.194 20:06:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.194 20:06:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.194 20:06:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.194 20:06:47 -- accel/accel.sh@42 -- # jq -r . 00:06:10.194 [2024-02-14 20:06:47.399050] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:10.194 [2024-02-14 20:06:47.399128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609369 ] 00:06:10.194 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.194 [2024-02-14 20:06:47.462652] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.194 [2024-02-14 20:06:47.530096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.194 [2024-02-14 20:06:47.530154] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:11.206 [2024-02-14 20:06:48.574828] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:11.465 20:06:48 -- accel/accel.sh@18 -- # out=' 00:06:11.465 SPDK Configuration: 00:06:11.465 Core mask: 0x1 00:06:11.465 00:06:11.465 Accel Perf Configuration: 00:06:11.465 Workload Type: xor 00:06:11.465 Source buffers: 3 00:06:11.465 Transfer size: 4096 bytes 00:06:11.465 Vector count 1 00:06:11.465 Module: software 00:06:11.465 Queue depth: 32 00:06:11.465 Allocate depth: 32 00:06:11.465 # threads/core: 1 00:06:11.465 Run time: 1 seconds 00:06:11.465 Verify: Yes 00:06:11.465 00:06:11.465 Running for 1 seconds... 
00:06:11.465 00:06:11.465 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:11.465 ------------------------------------------------------------------------------------ 00:06:11.465 0,0 476192/s 1860 MiB/s 0 0 00:06:11.465 ==================================================================================== 00:06:11.465 Total 476192/s 1860 MiB/s 0 0' 00:06:11.465 20:06:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:11.465 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.465 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.465 20:06:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:11.465 20:06:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.465 20:06:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.465 20:06:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.465 20:06:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.465 20:06:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.465 20:06:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.465 20:06:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.465 20:06:48 -- accel/accel.sh@42 -- # jq -r . 00:06:11.465 [2024-02-14 20:06:48.740514] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:11.465 [2024-02-14 20:06:48.740564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609594 ] 00:06:11.465 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.465 [2024-02-14 20:06:48.797471] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.465 [2024-02-14 20:06:48.864329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.465 [2024-02-14 20:06:48.864384] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val=0x1 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val=xor 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- 
accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val=3 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val=software 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val=32 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val=32 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val=1 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val=Yes 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.725 20:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.725 20:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.725 20:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:12.663 [2024-02-14 20:06:49.909161] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:12.663 20:06:50 -- accel/accel.sh@21 -- # val= 00:06:12.663 20:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.663 20:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:12.663 20:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:12.663 20:06:50 -- accel/accel.sh@21 -- # val= 00:06:12.663 20:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.663 20:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:12.663 20:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:12.663 20:06:50 -- accel/accel.sh@21 -- # val= 00:06:12.663 20:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.663 
20:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:12.664 20:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:12.664 20:06:50 -- accel/accel.sh@21 -- # val= 00:06:12.664 20:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.664 20:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:12.664 20:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:12.664 20:06:50 -- accel/accel.sh@21 -- # val= 00:06:12.664 20:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.664 20:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:12.664 20:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:12.664 20:06:50 -- accel/accel.sh@21 -- # val= 00:06:12.664 20:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.664 20:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:12.664 20:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:12.664 20:06:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.664 20:06:50 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:12.664 20:06:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.664 00:06:12.664 real 0m2.699s 00:06:12.664 user 0m2.477s 00:06:12.664 sys 0m0.229s 00:06:12.664 20:06:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.664 20:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:12.664 ************************************ 00:06:12.664 END TEST accel_xor 00:06:12.664 ************************************ 00:06:12.924 20:06:50 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:12.924 20:06:50 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:06:12.924 20:06:50 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:12.924 20:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:12.924 ************************************ 00:06:12.924 START TEST accel_dif_verify 00:06:12.924 ************************************ 00:06:12.924 20:06:50 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_verify 00:06:12.924 20:06:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.924 20:06:50 -- accel/accel.sh@17 -- # local accel_module 00:06:12.924 20:06:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:12.924 20:06:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:12.924 20:06:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.924 20:06:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.924 20:06:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.924 20:06:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.924 20:06:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.924 20:06:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.924 20:06:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.924 20:06:50 -- accel/accel.sh@42 -- # jq -r . 00:06:12.924 [2024-02-14 20:06:50.133846] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:12.924 [2024-02-14 20:06:50.133924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609855 ] 00:06:12.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.924 [2024-02-14 20:06:50.193587] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.924 [2024-02-14 20:06:50.264112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.924 [2024-02-14 20:06:50.264166] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:14.303 [2024-02-14 20:06:51.309613] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:14.303 20:06:51 -- accel/accel.sh@18 -- # out=' 00:06:14.303 SPDK Configuration: 00:06:14.303 Core mask: 0x1 00:06:14.303 00:06:14.303 Accel Perf Configuration: 00:06:14.303 Workload Type: dif_verify 00:06:14.303 Vector size: 4096 bytes 00:06:14.303 Transfer size: 4096 bytes 00:06:14.303 Block size: 512 bytes 00:06:14.303 Metadata size: 8 bytes 00:06:14.303 Vector count 1 00:06:14.303 Module: software 00:06:14.303 Queue depth: 32 00:06:14.303 Allocate depth: 32 00:06:14.303 # threads/core: 1 00:06:14.303 Run time: 1 seconds 00:06:14.303 Verify: No 00:06:14.303 00:06:14.303 Running for 1 seconds... 00:06:14.303 00:06:14.303 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.303 ------------------------------------------------------------------------------------ 00:06:14.303 0,0 134240/s 524 MiB/s 0 0 00:06:14.303 ==================================================================================== 00:06:14.303 Total 134240/s 524 MiB/s 0 0' 00:06:14.303 20:06:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.303 20:06:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:14.303 20:06:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.303 20:06:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.303 20:06:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.303 20:06:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.303 20:06:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.303 20:06:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.303 20:06:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.303 20:06:51 -- accel/accel.sh@42 -- # jq -r . 00:06:14.303 [2024-02-14 20:06:51.472201] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:06:14.303 [2024-02-14 20:06:51.472252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610075 ] 00:06:14.303 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.303 [2024-02-14 20:06:51.523569] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.303 [2024-02-14 20:06:51.590947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.303 [2024-02-14 20:06:51.591002] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:14.303 20:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.303 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.303 20:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.303 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.303 20:06:51 -- accel/accel.sh@21 -- # val=0x1 00:06:14.303 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.303 20:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.303 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.303 20:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.303 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.303 20:06:51 -- accel/accel.sh@21 -- # val=dif_verify 00:06:14.303 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.303 20:06:51 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.303 20:06:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.303 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.303 20:06:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.303 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.303 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val=software 
00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val=32 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val=32 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val=1 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val=No 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.304 20:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.304 20:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.304 20:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:15.242 [2024-02-14 20:06:52.635678] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:15.502 20:06:52 -- accel/accel.sh@21 -- # val= 00:06:15.502 20:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:15.502 20:06:52 -- accel/accel.sh@21 -- # val= 00:06:15.502 20:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:15.502 20:06:52 -- accel/accel.sh@21 -- # val= 00:06:15.502 20:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:15.502 20:06:52 -- accel/accel.sh@21 -- # val= 00:06:15.502 20:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:15.502 20:06:52 -- accel/accel.sh@21 -- # val= 00:06:15.502 20:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:15.502 20:06:52 -- accel/accel.sh@21 -- # val= 00:06:15.502 20:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:15.502 20:06:52 -- accel/accel.sh@20 -- # read -r var val 
00:06:15.502 20:06:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.502 20:06:52 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:15.502 20:06:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.502 00:06:15.502 real 0m2.684s 00:06:15.502 user 0m2.477s 00:06:15.502 sys 0m0.217s 00:06:15.502 20:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.502 20:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.502 ************************************ 00:06:15.502 END TEST accel_dif_verify 00:06:15.502 ************************************ 00:06:15.502 20:06:52 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:15.502 20:06:52 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:06:15.502 20:06:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:15.502 20:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.502 ************************************ 00:06:15.502 START TEST accel_dif_generate 00:06:15.502 ************************************ 00:06:15.502 20:06:52 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate 00:06:15.502 20:06:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.502 20:06:52 -- accel/accel.sh@17 -- # local accel_module 00:06:15.502 20:06:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:15.502 20:06:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:15.502 20:06:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.502 20:06:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.502 20:06:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.502 20:06:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.502 20:06:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.502 20:06:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.502 20:06:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.502 20:06:52 -- accel/accel.sh@42 -- # jq -r . 00:06:15.502 [2024-02-14 20:06:52.843173] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:15.502 [2024-02-14 20:06:52.843225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610325 ] 00:06:15.502 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.502 [2024-02-14 20:06:52.896017] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.761 [2024-02-14 20:06:52.966361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.761 [2024-02-14 20:06:52.966414] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:16.700 [2024-02-14 20:06:54.010952] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:16.960 20:06:54 -- accel/accel.sh@18 -- # out=' 00:06:16.960 SPDK Configuration: 00:06:16.960 Core mask: 0x1 00:06:16.960 00:06:16.960 Accel Perf Configuration: 00:06:16.960 Workload Type: dif_generate 00:06:16.960 Vector size: 4096 bytes 00:06:16.960 Transfer size: 4096 bytes 00:06:16.960 Block size: 512 bytes 00:06:16.960 Metadata size: 8 bytes 00:06:16.960 Vector count 1 00:06:16.960 Module: software 00:06:16.960 Queue depth: 32 00:06:16.960 Allocate depth: 32 00:06:16.960 # threads/core: 1 00:06:16.960 Run time: 1 seconds 00:06:16.960 Verify: No 00:06:16.960 00:06:16.960 Running for 1 seconds... 00:06:16.960 00:06:16.960 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.960 ------------------------------------------------------------------------------------ 00:06:16.960 0,0 163744/s 639 MiB/s 0 0 00:06:16.960 ==================================================================================== 00:06:16.960 Total 163744/s 639 MiB/s 0 0' 00:06:16.960 20:06:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:16.960 20:06:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.960 20:06:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.960 20:06:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.960 20:06:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.960 20:06:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.960 20:06:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.960 20:06:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.960 20:06:54 -- accel/accel.sh@42 -- # jq -r . 00:06:16.960 [2024-02-14 20:06:54.177189] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:06:16.960 [2024-02-14 20:06:54.177241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610537 ] 00:06:16.960 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.960 [2024-02-14 20:06:54.234216] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.960 [2024-02-14 20:06:54.300527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.960 [2024-02-14 20:06:54.300582] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val= 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val= 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val=0x1 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val= 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val= 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val=dif_generate 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val= 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # 
val=software 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val=32 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.960 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.960 20:06:54 -- accel/accel.sh@21 -- # val=32 00:06:16.960 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.961 20:06:54 -- accel/accel.sh@21 -- # val=1 00:06:16.961 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.961 20:06:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.961 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.961 20:06:54 -- accel/accel.sh@21 -- # val=No 00:06:16.961 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.961 20:06:54 -- accel/accel.sh@21 -- # val= 00:06:16.961 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:16.961 20:06:54 -- accel/accel.sh@21 -- # val= 00:06:16.961 20:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:16.961 20:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:18.342 [2024-02-14 20:06:55.344481] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:18.342 20:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.342 20:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.342 20:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.342 20:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.342 20:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.342 20:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.342 20:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.342 20:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.342 20:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.342 20:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.342 20:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.342 20:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.342 20:06:55 -- accel/accel.sh@20 -- # read -r var 
val 00:06:18.342 20:06:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.342 20:06:55 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:18.342 20:06:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.342 00:06:18.342 real 0m2.673s 00:06:18.342 user 0m2.476s 00:06:18.342 sys 0m0.208s 00:06:18.342 20:06:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.342 20:06:55 -- common/autotest_common.sh@10 -- # set +x 00:06:18.342 ************************************ 00:06:18.342 END TEST accel_dif_generate 00:06:18.342 ************************************ 00:06:18.342 20:06:55 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:18.342 20:06:55 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:06:18.342 20:06:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:18.342 20:06:55 -- common/autotest_common.sh@10 -- # set +x 00:06:18.342 ************************************ 00:06:18.342 START TEST accel_dif_generate_copy 00:06:18.342 ************************************ 00:06:18.342 20:06:55 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate_copy 00:06:18.342 20:06:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.342 20:06:55 -- accel/accel.sh@17 -- # local accel_module 00:06:18.342 20:06:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:18.342 20:06:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:18.342 20:06:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.342 20:06:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.342 20:06:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.342 20:06:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.342 20:06:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.342 20:06:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.342 20:06:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.342 20:06:55 -- accel/accel.sh@42 -- # jq -r . 00:06:18.342 [2024-02-14 20:06:55.558053] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
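The blocks of 'val=...' lines above are bash xtrace output from accel.sh: the harness captures the accel_perf summary, splits each line on ':' with 'IFS=: read -r var val', and matches the keys in a case statement to record the opcode (accel_opc=dif_generate) and module. A minimal sketch of rerunning the dif_generate case by hand, outside the harness, follows; SPDK_DIR is an assumption standing in for the CI workspace path, and -t 1 mirrors the sibling cases recorded in this log:

  #!/usr/bin/env bash
  # Hedged sketch: standalone re-run of the dif_generate case.
  # SPDK_DIR is assumed; the workload name is taken from the trace above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate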
00:06:18.342 [2024-02-14 20:06:55.558130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610791 ]
00:06:18.342 EAL: No free 2048 kB hugepages reported on node 1
00:06:18.342 [2024-02-14 20:06:55.617339] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.342 [2024-02-14 20:06:55.684672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.342 [2024-02-14 20:06:55.684728] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:06:19.723 [2024-02-14 20:06:56.728684] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:06:19.723 20:06:56 -- accel/accel.sh@18 -- # out='
00:06:19.723 SPDK Configuration:
00:06:19.723 Core mask: 0x1
00:06:19.723
00:06:19.723 Accel Perf Configuration:
00:06:19.723 Workload Type: dif_generate_copy
00:06:19.723 Vector size: 4096 bytes
00:06:19.723 Transfer size: 4096 bytes
00:06:19.723 Vector count 1
00:06:19.723 Module: software
00:06:19.723 Queue depth: 32
00:06:19.723 Allocate depth: 32
00:06:19.723 # threads/core: 1
00:06:19.723 Run time: 1 seconds
00:06:19.723 Verify: No
00:06:19.723
00:06:19.723 Running for 1 seconds...
00:06:19.723
00:06:19.723 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:19.723 ------------------------------------------------------------------------------------
00:06:19.723 0,0 126496/s 501 MiB/s 0 0
00:06:19.723 ====================================================================================
00:06:19.723 Total 126496/s 494 MiB/s 0 0'
00:06:19.723 20:06:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:06:19.723 20:06:56 -- accel/accel.sh@20 -- # IFS=:
00:06:19.723 20:06:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:19.723 20:06:56 -- accel/accel.sh@20 -- # read -r var val
00:06:19.723 20:06:56 -- accel/accel.sh@12 -- # build_accel_config
00:06:19.723 20:06:56 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:19.723 20:06:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:19.723 20:06:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:19.723 20:06:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:19.723 20:06:56 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:19.723 20:06:56 -- accel/accel.sh@41 -- # local IFS=,
00:06:19.723 20:06:56 -- accel/accel.sh@42 -- # jq -r .
00:06:19.723 [2024-02-14 20:06:56.891152] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
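The Total row in the summary above is internally consistent with the 4096-byte transfer size: 126496 transfers/s x 4096 B = 518.1 MB/s, i.e. about 494 MiB/s. The '-c /dev/fd/62' argument shows the harness feeding the JSON accel config over an inherited file descriptor; a rough standalone equivalent using bash process substitution (the empty config body is a placeholder assumption) would be:

  # Hedged sketch: pass a JSON config to accel_perf over a file descriptor,
  # mirroring the '-c /dev/fd/62' pattern recorded in the trace.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" \
      -c <(printf '{"subsystems": []}') \
      -t 1 -w dif_generate_copy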
00:06:19.723 [2024-02-14 20:06:56.891200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611016 ] 00:06:19.723 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.723 [2024-02-14 20:06:56.943058] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.723 [2024-02-14 20:06:57.012479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.723 [2024-02-14 20:06:57.012533] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val= 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val= 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val=0x1 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val= 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val= 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val= 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val=software 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val=32 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # read 
-r var val 00:06:19.723 20:06:57 -- accel/accel.sh@21 -- # val=32 00:06:19.723 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.723 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.724 20:06:57 -- accel/accel.sh@21 -- # val=1 00:06:19.724 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.724 20:06:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.724 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.724 20:06:57 -- accel/accel.sh@21 -- # val=No 00:06:19.724 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.724 20:06:57 -- accel/accel.sh@21 -- # val= 00:06:19.724 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:19.724 20:06:57 -- accel/accel.sh@21 -- # val= 00:06:19.724 20:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:19.724 20:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:20.663 [2024-02-14 20:06:58.058587] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:20.923 20:06:58 -- accel/accel.sh@21 -- # val= 00:06:20.923 20:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:20.923 20:06:58 -- accel/accel.sh@21 -- # val= 00:06:20.923 20:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:20.923 20:06:58 -- accel/accel.sh@21 -- # val= 00:06:20.923 20:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:20.923 20:06:58 -- accel/accel.sh@21 -- # val= 00:06:20.923 20:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:20.923 20:06:58 -- accel/accel.sh@21 -- # val= 00:06:20.923 20:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:20.923 20:06:58 -- accel/accel.sh@21 -- # val= 00:06:20.923 20:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:20.923 20:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:20.923 20:06:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.923 20:06:58 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:20.923 20:06:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.923 00:06:20.923 real 0m2.681s 00:06:20.923 user 0m2.478s 00:06:20.923 sys 0m0.210s 00:06:20.923 20:06:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.923 20:06:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.923 
************************************ 00:06:20.923 END TEST accel_dif_generate_copy 00:06:20.923 ************************************ 00:06:20.923 20:06:58 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:20.923 20:06:58 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.923 20:06:58 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:06:20.923 20:06:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:20.923 20:06:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.923 ************************************ 00:06:20.923 START TEST accel_comp 00:06:20.923 ************************************ 00:06:20.923 20:06:58 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.923 20:06:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.923 20:06:58 -- accel/accel.sh@17 -- # local accel_module 00:06:20.923 20:06:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.923 20:06:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.923 20:06:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.923 20:06:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.923 20:06:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.923 20:06:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.923 20:06:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.923 20:06:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.923 20:06:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.923 20:06:58 -- accel/accel.sh@42 -- # jq -r . 00:06:20.923 [2024-02-14 20:06:58.273052] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:20.923 [2024-02-14 20:06:58.273125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611273 ] 00:06:20.923 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.923 [2024-02-14 20:06:58.334840] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.183 [2024-02-14 20:06:58.403435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.183 [2024-02-14 20:06:58.403490] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:22.121 [2024-02-14 20:06:59.450580] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:22.381 20:06:59 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:22.381
00:06:22.381 SPDK Configuration:
00:06:22.381 Core mask: 0x1
00:06:22.381
00:06:22.381 Accel Perf Configuration:
00:06:22.381 Workload Type: compress
00:06:22.381 Transfer size: 4096 bytes
00:06:22.381 Vector count 1
00:06:22.381 Module: software
00:06:22.381 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:22.382 Queue depth: 32
00:06:22.382 Allocate depth: 32
00:06:22.382 # threads/core: 1
00:06:22.382 Run time: 1 seconds
00:06:22.382 Verify: No
00:06:22.382
00:06:22.382 Running for 1 seconds...
00:06:22.382
00:06:22.382 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:22.382 ------------------------------------------------------------------------------------
00:06:22.382 0,0 65280/s 272 MiB/s 0 0
00:06:22.382 ====================================================================================
00:06:22.382 Total 65280/s 255 MiB/s 0 0'
00:06:22.382 20:06:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=:
00:06:22.382 20:06:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val
00:06:22.382 20:06:59 -- accel/accel.sh@12 -- # build_accel_config
00:06:22.382 20:06:59 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:22.382 20:06:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:22.382 20:06:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:22.382 20:06:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:22.382 20:06:59 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:22.382 20:06:59 -- accel/accel.sh@41 -- # local IFS=,
00:06:22.382 20:06:59 -- accel/accel.sh@42 -- # jq -r .
00:06:22.382 [2024-02-14 20:06:59.612426] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
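The compress case, and the decompress cases that follow, read their input from the bib file passed with -l; the invocation recorded just above, minus the fd-based config, can be reproduced as:

  # Hedged sketch: standalone compress run; flags and paths are taken
  # verbatim from the trace (adjust SPDK_DIR to a local checkout).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress \
      -l "$SPDK_DIR/test/accel/bib"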
00:06:22.382 [2024-02-14 20:06:59.612474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611498 ] 00:06:22.382 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.382 [2024-02-14 20:06:59.663698] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.382 [2024-02-14 20:06:59.730905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.382 [2024-02-14 20:06:59.730962] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val= 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val= 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val= 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val=0x1 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val= 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val= 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val=compress 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val= 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val=software 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 
20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val=32 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val=32 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val=1 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val=No 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val= 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.382 20:06:59 -- accel/accel.sh@21 -- # val= 00:06:22.382 20:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.382 20:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:23.763 [2024-02-14 20:07:00.777462] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:23.763 20:07:00 -- accel/accel.sh@21 -- # val= 00:06:23.763 20:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:23.763 20:07:00 -- accel/accel.sh@21 -- # val= 00:06:23.763 20:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:23.763 20:07:00 -- accel/accel.sh@21 -- # val= 00:06:23.763 20:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:23.763 20:07:00 -- accel/accel.sh@21 -- # val= 00:06:23.763 20:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:23.763 20:07:00 -- accel/accel.sh@21 -- # val= 00:06:23.763 20:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:23.763 20:07:00 -- accel/accel.sh@21 -- # val= 00:06:23.763 20:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:23.763 20:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:23.763 20:07:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.763 20:07:00 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:23.764 20:07:00 -- accel/accel.sh@28 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:23.764 00:06:23.764 real 0m2.687s 00:06:23.764 user 0m2.474s 00:06:23.764 sys 0m0.220s 00:06:23.764 20:07:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.764 20:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:23.764 ************************************ 00:06:23.764 END TEST accel_comp 00:06:23.764 ************************************ 00:06:23.764 20:07:00 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.764 20:07:00 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:06:23.764 20:07:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:23.764 20:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:23.764 ************************************ 00:06:23.764 START TEST accel_decomp 00:06:23.764 ************************************ 00:06:23.764 20:07:00 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.764 20:07:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.764 20:07:00 -- accel/accel.sh@17 -- # local accel_module 00:06:23.764 20:07:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.764 20:07:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.764 20:07:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.764 20:07:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.764 20:07:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.764 20:07:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.764 20:07:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.764 20:07:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.764 20:07:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.764 20:07:00 -- accel/accel.sh@42 -- # jq -r . 00:06:23.764 [2024-02-14 20:07:00.988889] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:23.764 [2024-02-14 20:07:00.988952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611738 ] 00:06:23.764 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.764 [2024-02-14 20:07:01.048639] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.764 [2024-02-14 20:07:01.117388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.764 [2024-02-14 20:07:01.117443] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:25.145 [2024-02-14 20:07:02.164450] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:25.145 20:07:02 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:25.145
00:06:25.145 SPDK Configuration:
00:06:25.145 Core mask: 0x1
00:06:25.145
00:06:25.145 Accel Perf Configuration:
00:06:25.145 Workload Type: decompress
00:06:25.145 Transfer size: 4096 bytes
00:06:25.145 Vector count 1
00:06:25.145 Module: software
00:06:25.145 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:25.145 Queue depth: 32
00:06:25.145 Allocate depth: 32
00:06:25.145 # threads/core: 1
00:06:25.145 Run time: 1 seconds
00:06:25.145 Verify: Yes
00:06:25.145
00:06:25.145 Running for 1 seconds...
00:06:25.145
00:06:25.145 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:25.145 ------------------------------------------------------------------------------------
00:06:25.145 0,0 74112/s 136 MiB/s 0 0
00:06:25.145 ====================================================================================
00:06:25.145 Total 74112/s 289 MiB/s 0 0'
00:06:25.145 20:07:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=:
00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # read -r var val
00:06:25.145 20:07:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:25.145 20:07:02 -- accel/accel.sh@12 -- # build_accel_config
00:06:25.145 20:07:02 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:25.145 20:07:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:25.145 20:07:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:25.145 20:07:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:25.145 20:07:02 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:25.145 20:07:02 -- accel/accel.sh@41 -- # local IFS=,
00:06:25.145 20:07:02 -- accel/accel.sh@42 -- # jq -r .
00:06:25.145 [2024-02-14 20:07:02.332337] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
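Relative to the compress run, accel_decomp adds -y, which lines up with the summary flipping from 'Verify: No' to 'Verify: Yes'; any miscompares would land in the last column of the table. A standalone equivalent of the recorded command:

  # Hedged sketch: decompress with verification, per the recorded flags.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y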
00:06:25.145 [2024-02-14 20:07:02.332403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611971 ] 00:06:25.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.145 [2024-02-14 20:07:02.391593] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.145 [2024-02-14 20:07:02.459560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.145 [2024-02-14 20:07:02.459618] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:25.145 20:07:02 -- accel/accel.sh@21 -- # val= 00:06:25.145 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.145 20:07:02 -- accel/accel.sh@21 -- # val= 00:06:25.145 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.145 20:07:02 -- accel/accel.sh@21 -- # val= 00:06:25.145 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.145 20:07:02 -- accel/accel.sh@21 -- # val=0x1 00:06:25.145 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.145 20:07:02 -- accel/accel.sh@21 -- # val= 00:06:25.145 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.145 20:07:02 -- accel/accel.sh@21 -- # val= 00:06:25.145 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.145 20:07:02 -- accel/accel.sh@21 -- # val=decompress 00:06:25.145 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.145 20:07:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.145 20:07:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.145 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.145 20:07:02 -- accel/accel.sh@21 -- # val= 00:06:25.145 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.145 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 20:07:02 -- accel/accel.sh@21 -- # val=software 00:06:25.146 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 20:07:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 20:07:02 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.146 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 
20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 20:07:02 -- accel/accel.sh@21 -- # val=32 00:06:25.146 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 20:07:02 -- accel/accel.sh@21 -- # val=32 00:06:25.146 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 20:07:02 -- accel/accel.sh@21 -- # val=1 00:06:25.146 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 20:07:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.146 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 20:07:02 -- accel/accel.sh@21 -- # val=Yes 00:06:25.146 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 20:07:02 -- accel/accel.sh@21 -- # val= 00:06:25.146 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 20:07:02 -- accel/accel.sh@21 -- # val= 00:06:25.146 20:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 20:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:26.526 [2024-02-14 20:07:03.506285] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:26.526 20:07:03 -- accel/accel.sh@21 -- # val= 00:06:26.526 20:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.526 20:07:03 -- accel/accel.sh@21 -- # val= 00:06:26.526 20:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.526 20:07:03 -- accel/accel.sh@21 -- # val= 00:06:26.526 20:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.526 20:07:03 -- accel/accel.sh@21 -- # val= 00:06:26.526 20:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.526 20:07:03 -- accel/accel.sh@21 -- # val= 00:06:26.526 20:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.526 20:07:03 -- accel/accel.sh@21 -- # val= 00:06:26.526 20:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.526 20:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.526 20:07:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.526 20:07:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:26.526 20:07:03 -- accel/accel.sh@28 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:26.526 00:06:26.526 real 0m2.699s 00:06:26.526 user 0m2.476s 00:06:26.526 sys 0m0.230s 00:06:26.526 20:07:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.526 20:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:26.526 ************************************ 00:06:26.526 END TEST accel_decomp 00:06:26.526 ************************************ 00:06:26.526 20:07:03 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.526 20:07:03 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:26.526 20:07:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:26.526 20:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:26.526 ************************************ 00:06:26.526 START TEST accel_decmop_full 00:06:26.526 ************************************ 00:06:26.526 20:07:03 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.526 20:07:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.526 20:07:03 -- accel/accel.sh@17 -- # local accel_module 00:06:26.526 20:07:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.526 20:07:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.526 20:07:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.526 20:07:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.526 20:07:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.526 20:07:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.526 20:07:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.526 20:07:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.526 20:07:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.526 20:07:03 -- accel/accel.sh@42 -- # jq -r . 00:06:26.526 [2024-02-14 20:07:03.720540] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:26.526 [2024-02-14 20:07:03.720607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612220 ] 00:06:26.526 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.526 [2024-02-14 20:07:03.779873] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.526 [2024-02-14 20:07:03.848261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.526 [2024-02-14 20:07:03.848315] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:27.907 [2024-02-14 20:07:04.903948] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:27.907 20:07:05 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:27.907
00:06:27.907 SPDK Configuration:
00:06:27.907 Core mask: 0x1
00:06:27.907
00:06:27.907 Accel Perf Configuration:
00:06:27.907 Workload Type: decompress
00:06:27.907 Transfer size: 111250 bytes
00:06:27.907 Vector count 1
00:06:27.907 Module: software
00:06:27.907 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:27.907 Queue depth: 32
00:06:27.907 Allocate depth: 32
00:06:27.907 # threads/core: 1
00:06:27.907 Run time: 1 seconds
00:06:27.907 Verify: Yes
00:06:27.907
00:06:27.907 Running for 1 seconds...
00:06:27.907
00:06:27.907 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:27.907 ------------------------------------------------------------------------------------
00:06:27.907 0,0 4992/s 206 MiB/s 0 0
00:06:27.907 ====================================================================================
00:06:27.907 Total 4992/s 529 MiB/s 0 0'
00:06:27.907 20:07:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=:
00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val
00:06:27.907 20:07:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:27.907 20:07:05 -- accel/accel.sh@12 -- # build_accel_config
00:06:27.907 20:07:05 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:27.907 20:07:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:27.907 20:07:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:27.907 20:07:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:27.907 20:07:05 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:27.907 20:07:05 -- accel/accel.sh@41 -- # local IFS=,
00:06:27.907 20:07:05 -- accel/accel.sh@42 -- # jq -r .
00:06:27.907 [2024-02-14 20:07:05.067083] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
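The 'full' variant adds '-o 0', and the printed transfer size suggests the zero makes accel_perf use the file's native 111250-byte chunks instead of 4 KiB blocks (an inference from this log, not a documented guarantee). The Total row checks out: 4992 transfers/s x 111250 B = 555.4 MB/s, about 529 MiB/s.

  # Hedged sketch: the recorded decompress-full invocation.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -o 0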
00:06:27.907 [2024-02-14 20:07:05.067133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612447 ] 00:06:27.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.907 [2024-02-14 20:07:05.119969] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.907 [2024-02-14 20:07:05.189739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.907 [2024-02-14 20:07:05.189793] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val= 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val= 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val= 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val=0x1 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val= 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val= 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val=decompress 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val= 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val=software 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 
20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val=32 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val=32 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val=1 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val=Yes 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val= 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 20:07:05 -- accel/accel.sh@21 -- # val= 00:06:27.907 20:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 20:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:28.846 [2024-02-14 20:07:06.245384] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:29.106 20:07:06 -- accel/accel.sh@21 -- # val= 00:06:29.106 20:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.106 20:07:06 -- accel/accel.sh@21 -- # val= 00:06:29.106 20:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.106 20:07:06 -- accel/accel.sh@21 -- # val= 00:06:29.106 20:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.106 20:07:06 -- accel/accel.sh@21 -- # val= 00:06:29.106 20:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.106 20:07:06 -- accel/accel.sh@21 -- # val= 00:06:29.106 20:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.106 20:07:06 -- accel/accel.sh@21 -- # val= 00:06:29.106 20:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.106 20:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.106 20:07:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.106 20:07:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:29.106 20:07:06 -- accel/accel.sh@28 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:29.106 00:06:29.106 real 0m2.706s 00:06:29.106 user 0m2.484s 00:06:29.106 sys 0m0.228s 00:06:29.106 20:07:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.106 20:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.106 ************************************ 00:06:29.106 END TEST accel_decmop_full 00:06:29.106 ************************************ 00:06:29.106 20:07:06 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.106 20:07:06 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:29.106 20:07:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:29.106 20:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.106 ************************************ 00:06:29.106 START TEST accel_decomp_mcore 00:06:29.106 ************************************ 00:06:29.106 20:07:06 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.106 20:07:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.106 20:07:06 -- accel/accel.sh@17 -- # local accel_module 00:06:29.106 20:07:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.106 20:07:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.106 20:07:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.106 20:07:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.106 20:07:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.106 20:07:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.106 20:07:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.106 20:07:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.106 20:07:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.106 20:07:06 -- accel/accel.sh@42 -- # jq -r . 00:06:29.106 [2024-02-14 20:07:06.461925] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:29.106 [2024-02-14 20:07:06.461999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612700 ]
00:06:29.106 EAL: No free 2048 kB hugepages reported on node 1
00:06:29.366 [2024-02-14 20:07:06.524956] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:29.366 [2024-02-14 20:07:06.596221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:29.366 [2024-02-14 20:07:06.596235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:29.366 [2024-02-14 20:07:06.596252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:29.366 [2024-02-14 20:07:06.596253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.366 [2024-02-14 20:07:06.596600] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:06:30.305 [2024-02-14 20:07:07.646561] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:06:30.596 20:07:07 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:30.596
00:06:30.596 SPDK Configuration:
00:06:30.596 Core mask: 0xf
00:06:30.596
00:06:30.596 Accel Perf Configuration:
00:06:30.596 Workload Type: decompress
00:06:30.596 Transfer size: 4096 bytes
00:06:30.596 Vector count 1
00:06:30.596 Module: software
00:06:30.596 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:30.596 Queue depth: 32
00:06:30.596 Allocate depth: 32
00:06:30.596 # threads/core: 1
00:06:30.596 Run time: 1 seconds
00:06:30.596 Verify: Yes
00:06:30.596
00:06:30.596 Running for 1 seconds...
00:06:30.596
00:06:30.596 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:30.596 ------------------------------------------------------------------------------------
00:06:30.596 0,0 61376/s 113 MiB/s 0 0
00:06:30.596 3,0 63424/s 116 MiB/s 0 0
00:06:30.596 2,0 63424/s 116 MiB/s 0 0
00:06:30.596 1,0 63360/s 116 MiB/s 0 0
00:06:30.596 ====================================================================================
00:06:30.596 Total 251584/s 982 MiB/s 0 0'
00:06:30.596 20:07:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=:
00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val
00:06:30.596 20:07:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:30.596 20:07:07 -- accel/accel.sh@12 -- # build_accel_config
00:06:30.596 20:07:07 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:30.596 20:07:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:30.596 20:07:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:30.596 20:07:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:30.596 20:07:07 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:30.596 20:07:07 -- accel/accel.sh@41 -- # local IFS=,
00:06:30.596 20:07:07 -- accel/accel.sh@42 -- # jq -r .
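With '-m 0xf' the app starts four reactors (cores 0-3 in the startup notices above) and the per-core rows sum exactly to the Total: 61376 + 63424 + 63424 + 63360 = 251584 transfers/s, roughly 3.4x the 74112/s of the single-core decompress run. A standalone equivalent:

  # Hedged sketch: multi-core decompress with a 4-core mask, per the trace.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -m 0xf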
00:06:30.596 [2024-02-14 20:07:07.813512] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:30.596 [2024-02-14 20:07:07.813567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612934 ] 00:06:30.596 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.596 [2024-02-14 20:07:07.872495] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.596 [2024-02-14 20:07:07.942169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.596 [2024-02-14 20:07:07.942267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.596 [2024-02-14 20:07:07.942363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.596 [2024-02-14 20:07:07.942364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.596 [2024-02-14 20:07:07.942451] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val= 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val= 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val= 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val=0xf 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val= 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val= 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val=decompress 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val= 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val=software 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 
00:06:30.596 20:07:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val=32 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val=32 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val=1 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val=Yes 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:07 -- accel/accel.sh@21 -- # val= 00:06:30.596 20:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:08 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:08 -- accel/accel.sh@20 -- # read -r var val 00:06:30.596 20:07:08 -- accel/accel.sh@21 -- # val= 00:06:30.596 20:07:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.596 20:07:08 -- accel/accel.sh@20 -- # IFS=: 00:06:30.596 20:07:08 -- accel/accel.sh@20 -- # read -r var val 00:06:31.978 [2024-02-14 20:07:08.993142] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:31.978 20:07:09 -- accel/accel.sh@21 -- # val= 00:06:31.978 20:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.978 20:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.978 20:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.978 20:07:09 -- accel/accel.sh@21 -- # val= 00:06:31.978 20:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.978 20:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.978 20:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.978 20:07:09 -- accel/accel.sh@21 -- # val= 00:06:31.978 20:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.978 20:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.978 20:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.978 20:07:09 -- accel/accel.sh@21 -- # val= 00:06:31.978 20:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.978 20:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.978 20:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.979 20:07:09 -- accel/accel.sh@21 -- # val= 00:06:31.979 20:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # read -r var val 
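The 0xf handed to -m is the core mask that produced the four 'Reactor started on core' lines in the startup output; any such mask decodes with plain bash:
mask=0xf
for i in {0..31}; do (( (mask >> i) & 1 )) && echo "reactor expected on core $i"; done   # prints cores 0-3, matching 'Total cores available: 4'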
00:06:31.979 20:07:09 -- accel/accel.sh@21 -- # val= 00:06:31.979 20:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.979 20:07:09 -- accel/accel.sh@21 -- # val= 00:06:31.979 20:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.979 20:07:09 -- accel/accel.sh@21 -- # val= 00:06:31.979 20:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.979 20:07:09 -- accel/accel.sh@21 -- # val= 00:06:31.979 20:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.979 20:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.979 20:07:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.979 20:07:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:31.979 20:07:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.979 00:06:31.979 real 0m2.718s 00:06:31.979 user 0m9.140s 00:06:31.979 sys 0m0.241s 00:06:31.979 20:07:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.979 20:07:09 -- common/autotest_common.sh@10 -- # set +x 00:06:31.979 ************************************ 00:06:31.979 END TEST accel_decomp_mcore 00:06:31.979 ************************************ 00:06:31.979 20:07:09 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.979 20:07:09 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:31.979 20:07:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:31.979 20:07:09 -- common/autotest_common.sh@10 -- # set +x 00:06:31.979 ************************************ 00:06:31.979 START TEST accel_decomp_full_mcore 00:06:31.979 ************************************ 00:06:31.979 20:07:09 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.979 20:07:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.979 20:07:09 -- accel/accel.sh@17 -- # local accel_module 00:06:31.979 20:07:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.979 20:07:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.979 20:07:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.979 20:07:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.979 20:07:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.979 20:07:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.979 20:07:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.979 20:07:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.979 20:07:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.979 20:07:09 -- accel/accel.sh@42 -- # jq -r . 00:06:31.979 [2024-02-14 20:07:09.214435] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:31.979 [2024-02-14 20:07:09.214508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613188 ] 00:06:31.979 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.979 [2024-02-14 20:07:09.278574] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.979 [2024-02-14 20:07:09.350742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.979 [2024-02-14 20:07:09.350841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.979 [2024-02-14 20:07:09.350932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.979 [2024-02-14 20:07:09.350934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.979 [2024-02-14 20:07:09.351017] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:33.360 [2024-02-14 20:07:10.412253] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:33.360 20:07:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:33.360 00:06:33.360 SPDK Configuration: 00:06:33.360 Core mask: 0xf 00:06:33.360 00:06:33.360 Accel Perf Configuration: 00:06:33.360 Workload Type: decompress 00:06:33.360 Transfer size: 111250 bytes 00:06:33.360 Vector count 1 00:06:33.360 Module: software 00:06:33.360 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.360 Queue depth: 32 00:06:33.360 Allocate depth: 32 00:06:33.360 # threads/core: 1 00:06:33.360 Run time: 1 seconds 00:06:33.360 Verify: Yes 00:06:33.360 00:06:33.360 Running for 1 seconds... 00:06:33.360 00:06:33.360 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.360 ------------------------------------------------------------------------------------ 00:06:33.360 0,0 4576/s 189 MiB/s 0 0 00:06:33.360 3,0 4800/s 198 MiB/s 0 0 00:06:33.360 2,0 4800/s 198 MiB/s 0 0 00:06:33.360 1,0 4800/s 198 MiB/s 0 0 00:06:33.360 ==================================================================================== 00:06:33.360 Total 18976/s 2013 MiB/s 0 0' 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.360 20:07:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.360 20:07:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.360 20:07:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.360 20:07:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.360 20:07:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.360 20:07:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.360 20:07:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.360 20:07:10 -- accel/accel.sh@42 -- # jq -r . 
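This second table is the -o 0 variant of the same multicore test; comparing the two configuration dumps suggests -o 0 switches the workload from 4096-byte transfers to full 111250-byte buffers (an inference from this log, not a documented flag description). Invocation sketch with the traced flags:
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf
echo $((4576 + 4800 + 4800 + 4800))   # 18976: per-core rates again sum to the Total row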
00:06:33.360 [2024-02-14 20:07:10.579040] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:33.360 [2024-02-14 20:07:10.579093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613435 ] 00:06:33.360 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.360 [2024-02-14 20:07:10.635726] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.360 [2024-02-14 20:07:10.712932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.360 [2024-02-14 20:07:10.713031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.360 [2024-02-14 20:07:10.713123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.360 [2024-02-14 20:07:10.713125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.360 [2024-02-14 20:07:10.713208] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val= 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val= 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val= 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val=0xf 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val= 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val= 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val=decompress 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val= 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val=software 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 
00:06:33.360 20:07:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val=32 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val=32 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val=1 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val=Yes 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val= 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.360 20:07:10 -- accel/accel.sh@21 -- # val= 00:06:33.360 20:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.360 20:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:34.741 [2024-02-14 20:07:11.773439] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:34.741 20:07:11 -- accel/accel.sh@21 -- # val= 00:06:34.741 20:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:34.741 20:07:11 -- accel/accel.sh@21 -- # val= 00:06:34.741 20:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:34.741 20:07:11 -- accel/accel.sh@21 -- # val= 00:06:34.741 20:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:34.741 20:07:11 -- accel/accel.sh@21 -- # val= 00:06:34.741 20:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:34.741 20:07:11 -- accel/accel.sh@21 -- # val= 00:06:34.741 20:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # read -r var val 
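The repeating IFS=: / read -r var val / case "$var" lines in this trace are accel.sh parsing accel_perf's own configuration dump: each record is a 'Key: value' pair split on the colon, and a couple of keys are captured into variables (the trace shows accel_opc=decompress and accel_module=software being set this way). A stripped-down reconstruction, with the case patterns guessed rather than copied from accel.sh:
while IFS=: read -r var val; do
  case "$var" in
    'Workload Type') accel_opc=${val# } ;;   # strip the leading space after the colon
    'Module') accel_module=${val# } ;;
  esac
done < <("$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf)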
00:06:34.741 20:07:11 -- accel/accel.sh@21 -- # val= 00:06:34.741 20:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:34.741 20:07:11 -- accel/accel.sh@21 -- # val= 00:06:34.741 20:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:34.741 20:07:11 -- accel/accel.sh@21 -- # val= 00:06:34.741 20:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:34.741 20:07:11 -- accel/accel.sh@21 -- # val= 00:06:34.741 20:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:34.741 20:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:34.741 20:07:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.741 20:07:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:34.741 20:07:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.741 00:06:34.741 real 0m2.740s 00:06:34.741 user 0m9.218s 00:06:34.741 sys 0m0.234s 00:06:34.741 20:07:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.741 20:07:11 -- common/autotest_common.sh@10 -- # set +x 00:06:34.741 ************************************ 00:06:34.741 END TEST accel_decomp_full_mcore 00:06:34.741 ************************************ 00:06:34.741 20:07:11 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.741 20:07:11 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:34.742 20:07:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:34.742 20:07:11 -- common/autotest_common.sh@10 -- # set +x 00:06:34.742 ************************************ 00:06:34.742 START TEST accel_decomp_mthread 00:06:34.742 ************************************ 00:06:34.742 20:07:11 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.742 20:07:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.742 20:07:11 -- accel/accel.sh@17 -- # local accel_module 00:06:34.742 20:07:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.742 20:07:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.742 20:07:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.742 20:07:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.742 20:07:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.742 20:07:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.742 20:07:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.742 20:07:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.742 20:07:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.742 20:07:11 -- accel/accel.sh@42 -- # jq -r . 00:06:34.742 [2024-02-14 20:07:11.984867] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:34.742 [2024-02-14 20:07:11.984935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613688 ] 00:06:34.742 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.742 [2024-02-14 20:07:12.049540] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.742 [2024-02-14 20:07:12.121472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.742 [2024-02-14 20:07:12.121527] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:36.121 [2024-02-14 20:07:13.171636] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:36.121 20:07:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:36.121 00:06:36.121 SPDK Configuration: 00:06:36.121 Core mask: 0x1 00:06:36.121 00:06:36.121 Accel Perf Configuration: 00:06:36.121 Workload Type: decompress 00:06:36.121 Transfer size: 4096 bytes 00:06:36.121 Vector count 1 00:06:36.121 Module: software 00:06:36.121 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.121 Queue depth: 32 00:06:36.121 Allocate depth: 32 00:06:36.121 # threads/core: 2 00:06:36.121 Run time: 1 seconds 00:06:36.121 Verify: Yes 00:06:36.121 00:06:36.121 Running for 1 seconds... 00:06:36.121 00:06:36.121 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.121 ------------------------------------------------------------------------------------ 00:06:36.121 0,1 38240/s 70 MiB/s 0 0 00:06:36.121 0,0 38112/s 70 MiB/s 0 0 00:06:36.121 ==================================================================================== 00:06:36.121 Total 76352/s 298 MiB/s 0 0' 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.121 20:07:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.121 20:07:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.121 20:07:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.121 20:07:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.121 20:07:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.121 20:07:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.121 20:07:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.121 20:07:13 -- accel/accel.sh@42 -- # jq -r . 00:06:36.121 [2024-02-14 20:07:13.333098] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:36.121 [2024-02-14 20:07:13.333150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613923 ] 00:06:36.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.121 [2024-02-14 20:07:13.392445] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.121 [2024-02-14 20:07:13.462148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.121 [2024-02-14 20:07:13.462197] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val= 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val= 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val= 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val=0x1 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val= 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val= 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val=decompress 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val= 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val=software 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 
20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val=32 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val=32 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.121 20:07:13 -- accel/accel.sh@21 -- # val=2 00:06:36.121 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.121 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.122 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.122 20:07:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.122 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.122 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.122 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.122 20:07:13 -- accel/accel.sh@21 -- # val=Yes 00:06:36.122 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.122 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.122 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.122 20:07:13 -- accel/accel.sh@21 -- # val= 00:06:36.122 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.122 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.122 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.122 20:07:13 -- accel/accel.sh@21 -- # val= 00:06:36.122 20:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.122 20:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.122 20:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:37.502 [2024-02-14 20:07:14.511651] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:37.502 20:07:14 -- accel/accel.sh@21 -- # val= 00:06:37.502 20:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:37.502 20:07:14 -- accel/accel.sh@21 -- # val= 00:06:37.502 20:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:37.502 20:07:14 -- accel/accel.sh@21 -- # val= 00:06:37.502 20:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:37.502 20:07:14 -- accel/accel.sh@21 -- # val= 00:06:37.502 20:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:37.502 20:07:14 -- accel/accel.sh@21 -- # val= 00:06:37.502 20:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:37.502 20:07:14 -- accel/accel.sh@21 -- # val= 00:06:37.502 20:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # read -r var val 00:06:37.502 20:07:14 -- accel/accel.sh@21 -- # val= 00:06:37.502 20:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.502 20:07:14 -- accel/accel.sh@20 -- # IFS=: 00:06:37.502 20:07:14 -- 
accel/accel.sh@20 -- # read -r var val 00:06:37.502 20:07:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.502 20:07:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:37.502 20:07:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.502 00:06:37.502 real 0m2.704s 00:06:37.502 user 0m2.476s 00:06:37.502 sys 0m0.229s 00:06:37.502 20:07:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.502 20:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:37.502 ************************************ 00:06:37.502 END TEST accel_decomp_mthread 00:06:37.502 ************************************ 00:06:37.502 20:07:14 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.502 20:07:14 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:37.502 20:07:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:37.502 20:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:37.502 ************************************ 00:06:37.502 START TEST accel_deomp_full_mthread 00:06:37.502 ************************************ 00:06:37.503 20:07:14 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.503 20:07:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.503 20:07:14 -- accel/accel.sh@17 -- # local accel_module 00:06:37.503 20:07:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.503 20:07:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.503 20:07:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.503 20:07:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.503 20:07:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.503 20:07:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.503 20:07:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.503 20:07:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.503 20:07:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.503 20:07:14 -- accel/accel.sh@42 -- # jq -r . 00:06:37.503 [2024-02-14 20:07:14.720609] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
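The test starting here is the full-buffer flavor of the two-thread run: the same -T 2 as accel_decomp_mthread, plus -o 0. In the result tables, '# threads/core: 2' shows up as two Core,Thread rows, 0,0 and 0,1, i.e. one core (mask 0x1) driving two worker threads. Invocation sketch from the traced flags:
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2   # single core, two threads, 111250-byte buffers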
00:06:37.503 [2024-02-14 20:07:14.720699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614170 ] 00:06:37.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.503 [2024-02-14 20:07:14.779905] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.503 [2024-02-14 20:07:14.848676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.503 [2024-02-14 20:07:14.848731] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:38.884 [2024-02-14 20:07:15.920821] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:38.884 20:07:16 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:38.884 00:06:38.884 SPDK Configuration: 00:06:38.884 Core mask: 0x1 00:06:38.884 00:06:38.884 Accel Perf Configuration: 00:06:38.884 Workload Type: decompress 00:06:38.884 Transfer size: 111250 bytes 00:06:38.884 Vector count 1 00:06:38.884 Module: software 00:06:38.884 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.884 Queue depth: 32 00:06:38.884 Allocate depth: 32 00:06:38.884 # threads/core: 2 00:06:38.884 Run time: 1 seconds 00:06:38.884 Verify: Yes 00:06:38.884 00:06:38.884 Running for 1 seconds... 00:06:38.884 00:06:38.884 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.884 ------------------------------------------------------------------------------------ 00:06:38.884 0,1 2560/s 105 MiB/s 0 0 00:06:38.884 0,0 2528/s 104 MiB/s 0 0 00:06:38.884 ==================================================================================== 00:06:38.884 Total 5088/s 539 MiB/s 0 0' 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.884 20:07:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.884 20:07:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.884 20:07:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.884 20:07:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.884 20:07:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.884 20:07:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.884 20:07:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.884 20:07:16 -- accel/accel.sh@42 -- # jq -r . 00:06:38.884 [2024-02-14 20:07:16.084411] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:38.884 [2024-02-14 20:07:16.084463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614402 ] 00:06:38.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.884 [2024-02-14 20:07:16.142772] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.884 [2024-02-14 20:07:16.211062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.884 [2024-02-14 20:07:16.211116] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val= 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val= 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val= 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val=0x1 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val= 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val= 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val=decompress 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val= 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val=software 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 
20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val=32 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val=32 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val=2 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val=Yes 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val= 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:38.884 20:07:16 -- accel/accel.sh@21 -- # val= 00:06:38.884 20:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # IFS=: 00:06:38.884 20:07:16 -- accel/accel.sh@20 -- # read -r var val 00:06:40.266 [2024-02-14 20:07:17.285069] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:40.266 20:07:17 -- accel/accel.sh@21 -- # val= 00:06:40.266 20:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:40.266 20:07:17 -- accel/accel.sh@21 -- # val= 00:06:40.266 20:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:40.266 20:07:17 -- accel/accel.sh@21 -- # val= 00:06:40.266 20:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:40.266 20:07:17 -- accel/accel.sh@21 -- # val= 00:06:40.266 20:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:40.266 20:07:17 -- accel/accel.sh@21 -- # val= 00:06:40.266 20:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:40.266 20:07:17 -- accel/accel.sh@21 -- # val= 00:06:40.266 20:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:40.266 20:07:17 -- accel/accel.sh@21 -- # val= 00:06:40.266 20:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.266 20:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:40.266 20:07:17 -- 
accel/accel.sh@20 -- # read -r var val 00:06:40.266 20:07:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.266 20:07:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:40.266 20:07:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.266 00:06:40.266 real 0m2.745s 00:06:40.266 user 0m2.522s 00:06:40.266 sys 0m0.221s 00:06:40.266 20:07:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.266 20:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:40.266 ************************************ 00:06:40.266 END TEST accel_deomp_full_mthread 00:06:40.266 ************************************ 00:06:40.266 20:07:17 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:40.266 20:07:17 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.266 20:07:17 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:06:40.266 20:07:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:40.266 20:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:40.266 20:07:17 -- accel/accel.sh@129 -- # build_accel_config 00:06:40.266 20:07:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.266 20:07:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.266 20:07:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.266 20:07:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.266 20:07:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.266 20:07:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.266 20:07:17 -- accel/accel.sh@42 -- # jq -r . 00:06:40.266 ************************************ 00:06:40.266 START TEST accel_dif_functional_tests 00:06:40.266 ************************************ 00:06:40.266 20:07:17 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.266 [2024-02-14 20:07:17.513058] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:40.266 [2024-02-14 20:07:17.513107] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614672 ] 00:06:40.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.266 [2024-02-14 20:07:17.569788] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.266 [2024-02-14 20:07:17.640029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.266 [2024-02-14 20:07:17.640129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.266 [2024-02-14 20:07:17.640131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.266 [2024-02-14 20:07:17.640207] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:40.527 00:06:40.527 00:06:40.527 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.527 http://cunit.sourceforge.net/ 00:06:40.527 00:06:40.527 00:06:40.527 Suite: accel_dif 00:06:40.527 Test: verify: DIF generated, GUARD check ...passed 00:06:40.527 Test: verify: DIF generated, APPTAG check ...passed 00:06:40.527 Test: verify: DIF generated, REFTAG check ...passed 00:06:40.527 Test: verify: DIF not generated, GUARD check ...[2024-02-14 20:07:17.707799] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.527 [2024-02-14 20:07:17.707844] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.527 passed 00:06:40.527 Test: verify: DIF not generated, APPTAG check ...[2024-02-14 20:07:17.707875] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.527 [2024-02-14 20:07:17.707888] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.527 passed 00:06:40.527 Test: verify: DIF not generated, REFTAG check ...[2024-02-14 20:07:17.707904] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.527 [2024-02-14 20:07:17.707918] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.527 passed 00:06:40.527 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:40.527 Test: verify: APPTAG incorrect, APPTAG check ...[2024-02-14 20:07:17.707954] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:40.527 passed 00:06:40.527 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:40.527 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:40.527 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:40.527 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-02-14 20:07:17.708045] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:40.527 passed 00:06:40.527 Test: generate copy: DIF generated, GUARD check ...passed 00:06:40.527 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:40.527 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:40.527 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:40.527 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:40.527 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 
00:06:40.527 Test: generate copy: iovecs-len validate ...[2024-02-14 20:07:17.708194] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:40.527 passed 00:06:40.527 Test: generate copy: buffer alignment validate ...passed 00:06:40.527 00:06:40.527 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.527 suites 1 1 n/a 0 0 00:06:40.527 tests 20 20 20 0 0 00:06:40.527 asserts 204 204 204 0 n/a 00:06:40.527 00:06:40.527 Elapsed time = 0.000 seconds 00:06:40.528 [2024-02-14 20:07:17.708351] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:40.528 00:06:40.528 real 0m0.425s 00:06:40.528 user 0m0.635s 00:06:40.528 sys 0m0.143s 00:06:40.528 20:07:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.528 20:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:40.528 ************************************ 00:06:40.528 END TEST accel_dif_functional_tests 00:06:40.528 ************************************ 00:06:40.528 00:06:40.528 real 0m57.401s 00:06:40.528 user 1m6.077s 00:06:40.528 sys 0m5.945s 00:06:40.528 20:07:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.528 20:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:40.528 ************************************ 00:06:40.528 END TEST accel 00:06:40.528 ************************************ 00:06:40.788 20:07:17 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.788 20:07:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:40.788 20:07:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:40.788 20:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:40.788 ************************************ 00:06:40.788 START TEST accel_rpc 00:06:40.788 ************************************ 00:06:40.788 20:07:17 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.788 * Looking for test storage... 00:06:40.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:40.788 20:07:18 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.788 20:07:18 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1614909 00:06:40.788 20:07:18 -- accel/accel_rpc.sh@15 -- # waitforlisten 1614909 00:06:40.788 20:07:18 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:40.788 20:07:18 -- common/autotest_common.sh@817 -- # '[' -z 1614909 ']' 00:06:40.788 20:07:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.788 20:07:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:40.788 20:07:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.788 20:07:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:40.788 20:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:40.788 [2024-02-14 20:07:18.095467] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
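The accel_rpc suite that begins here talks to a spdk_tgt held at --wait-for-rpc purely over JSON-RPC; the assign-opcode check below reduces to three calls, sketched with scripts/rpc.py on the default socket (the -o/-m arguments are the ones the test passes):
$SPDK/scripts/rpc.py accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
$SPDK/scripts/rpc.py framework_start_init                     # release the --wait-for-rpc pause
$SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expected output: software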
00:06:40.788 [2024-02-14 20:07:18.095520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614909 ] 00:06:40.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.788 [2024-02-14 20:07:18.156000] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.048 [2024-02-14 20:07:18.231917] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.048 [2024-02-14 20:07:18.232041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.618 20:07:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:41.618 20:07:18 -- common/autotest_common.sh@850 -- # return 0 00:06:41.618 20:07:18 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:41.618 20:07:18 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:41.618 20:07:18 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:41.618 20:07:18 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:41.618 20:07:18 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:41.618 20:07:18 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:41.618 20:07:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:41.618 20:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:41.618 ************************************ 00:06:41.618 START TEST accel_assign_opcode 00:06:41.618 ************************************ 00:06:41.618 20:07:18 -- common/autotest_common.sh@1102 -- # accel_assign_opcode_test_suite 00:06:41.618 20:07:18 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:41.618 20:07:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.618 20:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:41.618 [2024-02-14 20:07:18.889951] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:41.618 20:07:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.618 20:07:18 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:41.618 20:07:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.618 20:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:41.618 [2024-02-14 20:07:18.897967] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:41.618 20:07:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.618 20:07:18 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:41.618 20:07:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.618 20:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:41.878 20:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.878 20:07:19 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:41.878 20:07:19 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:41.878 20:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.878 20:07:19 -- accel/accel_rpc.sh@42 -- # grep software 00:06:41.878 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:41.878 20:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.878 software 00:06:41.878 00:06:41.878 real 0m0.235s 00:06:41.878 user 0m0.045s 00:06:41.878 sys 0m0.011s 00:06:41.878 20:07:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.878 20:07:19 -- common/autotest_common.sh@10 -- # set +x 
00:06:41.878 ************************************ 00:06:41.878 END TEST accel_assign_opcode 00:06:41.878 ************************************ 00:06:41.878 20:07:19 -- accel/accel_rpc.sh@55 -- # killprocess 1614909 00:06:41.878 20:07:19 -- common/autotest_common.sh@924 -- # '[' -z 1614909 ']' 00:06:41.878 20:07:19 -- common/autotest_common.sh@928 -- # kill -0 1614909 00:06:41.878 20:07:19 -- common/autotest_common.sh@929 -- # uname 00:06:41.878 20:07:19 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:41.878 20:07:19 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1614909 00:06:41.878 20:07:19 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:41.878 20:07:19 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:41.878 20:07:19 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1614909' 00:06:41.878 killing process with pid 1614909 00:06:41.878 20:07:19 -- common/autotest_common.sh@943 -- # kill 1614909 00:06:41.878 20:07:19 -- common/autotest_common.sh@948 -- # wait 1614909 00:06:42.139 00:06:42.139 real 0m1.557s 00:06:42.139 user 0m1.604s 00:06:42.139 sys 0m0.401s 00:06:42.139 20:07:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.139 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.139 ************************************ 00:06:42.139 END TEST accel_rpc 00:06:42.139 ************************************ 00:06:42.139 20:07:19 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.139 20:07:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:42.139 20:07:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:42.139 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.398 ************************************ 00:06:42.398 START TEST app_cmdline 00:06:42.398 ************************************ 00:06:42.398 20:07:19 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.398 * Looking for test storage... 00:06:42.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.398 20:07:19 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.398 20:07:19 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.398 20:07:19 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1615220 00:06:42.398 20:07:19 -- app/cmdline.sh@18 -- # waitforlisten 1615220 00:06:42.398 20:07:19 -- common/autotest_common.sh@817 -- # '[' -z 1615220 ']' 00:06:42.398 20:07:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.398 20:07:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.398 20:07:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.398 20:07:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.398 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.398 [2024-02-14 20:07:19.661869] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
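The cmdline test starting here launches spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so the target should answer exactly those two methods and reject everything else. A sketch of the two allowed calls (the jq filters are assumptions matching the JSON shown below):
$SPDK/scripts/rpc.py spdk_get_version | jq -r .version      # SPDK v24.05-pre git sha1 aa824ae66
$SPDK/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # expected: rpc_get_methods, spdk_get_version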
00:06:42.399 [2024-02-14 20:07:19.661939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615220 ] 00:06:42.399 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.399 [2024-02-14 20:07:19.721119] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.399 [2024-02-14 20:07:19.797296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.399 [2024-02-14 20:07:19.797410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.337 20:07:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:43.337 20:07:20 -- common/autotest_common.sh@850 -- # return 0 00:06:43.337 20:07:20 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:43.337 { 00:06:43.337 "version": "SPDK v24.05-pre git sha1 aa824ae66", 00:06:43.337 "fields": { 00:06:43.337 "major": 24, 00:06:43.337 "minor": 5, 00:06:43.337 "patch": 0, 00:06:43.337 "suffix": "-pre", 00:06:43.337 "commit": "aa824ae66" 00:06:43.337 } 00:06:43.337 } 00:06:43.337 20:07:20 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:43.337 20:07:20 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:43.337 20:07:20 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:43.337 20:07:20 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:43.337 20:07:20 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:43.337 20:07:20 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:43.337 20:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:43.337 20:07:20 -- app/cmdline.sh@26 -- # sort 00:06:43.337 20:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:43.337 20:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:43.337 20:07:20 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:43.337 20:07:20 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:43.337 20:07:20 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.337 20:07:20 -- common/autotest_common.sh@638 -- # local es=0 00:06:43.337 20:07:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.337 20:07:20 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.337 20:07:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.337 20:07:20 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.337 20:07:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.337 20:07:20 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.337 20:07:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.337 20:07:20 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.337 20:07:20 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:43.337 20:07:20 -- common/autotest_common.sh@641 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.597 request: 00:06:43.597 { 00:06:43.597 "method": "env_dpdk_get_mem_stats", 00:06:43.597 "req_id": 1 00:06:43.597 } 00:06:43.597 Got JSON-RPC error response 00:06:43.597 response: 00:06:43.597 { 00:06:43.597 "code": -32601, 00:06:43.597 "message": "Method not found" 00:06:43.597 } 00:06:43.597 20:07:20 -- common/autotest_common.sh@641 -- # es=1 00:06:43.597 20:07:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:43.597 20:07:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:43.597 20:07:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:43.597 20:07:20 -- app/cmdline.sh@1 -- # killprocess 1615220 00:06:43.597 20:07:20 -- common/autotest_common.sh@924 -- # '[' -z 1615220 ']' 00:06:43.597 20:07:20 -- common/autotest_common.sh@928 -- # kill -0 1615220 00:06:43.597 20:07:20 -- common/autotest_common.sh@929 -- # uname 00:06:43.597 20:07:20 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:43.597 20:07:20 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1615220 00:06:43.597 20:07:20 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:43.597 20:07:20 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:43.597 20:07:20 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1615220' 00:06:43.597 killing process with pid 1615220 00:06:43.597 20:07:20 -- common/autotest_common.sh@943 -- # kill 1615220 00:06:43.597 20:07:20 -- common/autotest_common.sh@948 -- # wait 1615220 00:06:43.857 00:06:43.857 real 0m1.674s 00:06:43.857 user 0m2.011s 00:06:43.857 sys 0m0.410s 00:06:43.857 20:07:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.857 20:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:43.857 ************************************ 00:06:43.857 END TEST app_cmdline 00:06:43.857 ************************************ 00:06:43.857 20:07:21 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.857 20:07:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:43.857 20:07:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:43.857 20:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:43.857 ************************************ 00:06:43.857 START TEST version 00:06:43.857 ************************************ 00:06:43.857 20:07:21 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:44.118 * Looking for test storage... 
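The negative half of the cmdline test calls env_dpdk_get_mem_stats, which is outside the allowlist, and asserts the -32601 "Method not found" response shown above. A minimal sketch of that check, assuming rpc.py's non-zero exit status on a JSON-RPC error:

  if ./scripts/rpc.py env_dpdk_get_mem_stats 2>/dev/null; then
      echo 'disallowed RPC unexpectedly succeeded' >&2
      exit 1
  fi
  # on failure rpc.py prints the error object seen in the trace:
  #   {"code": -32601, "message": "Method not found"}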
00:06:44.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:44.118 20:07:21 -- app/version.sh@17 -- # get_header_version major 00:06:44.118 20:07:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.118 20:07:21 -- app/version.sh@14 -- # cut -f2 00:06:44.118 20:07:21 -- app/version.sh@14 -- # tr -d '"' 00:06:44.118 20:07:21 -- app/version.sh@17 -- # major=24 00:06:44.118 20:07:21 -- app/version.sh@18 -- # get_header_version minor 00:06:44.118 20:07:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.118 20:07:21 -- app/version.sh@14 -- # cut -f2 00:06:44.118 20:07:21 -- app/version.sh@14 -- # tr -d '"' 00:06:44.118 20:07:21 -- app/version.sh@18 -- # minor=5 00:06:44.118 20:07:21 -- app/version.sh@19 -- # get_header_version patch 00:06:44.118 20:07:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.118 20:07:21 -- app/version.sh@14 -- # cut -f2 00:06:44.118 20:07:21 -- app/version.sh@14 -- # tr -d '"' 00:06:44.118 20:07:21 -- app/version.sh@19 -- # patch=0 00:06:44.118 20:07:21 -- app/version.sh@20 -- # get_header_version suffix 00:06:44.118 20:07:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.118 20:07:21 -- app/version.sh@14 -- # cut -f2 00:06:44.118 20:07:21 -- app/version.sh@14 -- # tr -d '"' 00:06:44.118 20:07:21 -- app/version.sh@20 -- # suffix=-pre 00:06:44.118 20:07:21 -- app/version.sh@22 -- # version=24.5 00:06:44.118 20:07:21 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.118 20:07:21 -- app/version.sh@28 -- # version=24.5rc0 00:06:44.118 20:07:21 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:44.118 20:07:21 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.118 20:07:21 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:44.118 20:07:21 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:44.118 00:06:44.118 real 0m0.144s 00:06:44.118 user 0m0.074s 00:06:44.118 sys 0m0.105s 00:06:44.118 20:07:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.118 20:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.118 ************************************ 00:06:44.118 END TEST version 00:06:44.118 ************************************ 00:06:44.118 20:07:21 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:44.118 20:07:21 -- spdk/autotest.sh@204 -- # uname -s 00:06:44.118 20:07:21 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:44.118 20:07:21 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:44.118 20:07:21 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:44.118 20:07:21 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:44.118 20:07:21 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:44.118 20:07:21 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:44.118 20:07:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:44.118 20:07:21 -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.118 20:07:21 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:44.118 20:07:21 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:44.118 20:07:21 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:44.118 20:07:21 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:44.118 20:07:21 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:44.118 20:07:21 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:44.118 20:07:21 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.118 20:07:21 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:44.118 20:07:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:44.118 20:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.118 ************************************ 00:06:44.118 START TEST nvmf_tcp 00:06:44.118 ************************************ 00:06:44.118 20:07:21 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.379 * Looking for test storage... 00:06:44.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:44.379 20:07:21 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:44.379 20:07:21 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:44.379 20:07:21 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.379 20:07:21 -- nvmf/common.sh@7 -- # uname -s 00:06:44.379 20:07:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.379 20:07:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.379 20:07:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.379 20:07:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.379 20:07:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.379 20:07:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.379 20:07:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.379 20:07:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.379 20:07:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.379 20:07:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.379 20:07:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:44.379 20:07:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:44.379 20:07:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.379 20:07:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.379 20:07:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.379 20:07:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.379 20:07:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.379 20:07:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.379 20:07:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.379 20:07:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
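The version suite that finished just above derives the version string from include/spdk/version.h with a grep/cut/tr pipeline, then compares it against the installed python package. A condensed sketch of that helper; ver_field is a hypothetical wrapper for the get_header_version calls in the trace, and the -pre suffix maps to rc0 exactly as the trace shows:

  hdr=include/spdk/version.h
  ver_field() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
  major=$(ver_field MAJOR); minor=$(ver_field MINOR)
  patch=$(ver_field PATCH); suffix=$(ver_field SUFFIX)
  version="$major.$minor"; (( patch != 0 )) && version+=".$patch"
  [[ $suffix == -pre ]] && version+=rc0
  echo "$version"   # 24.5rc0 here, matching python3 -c 'import spdk; print(spdk.__version__)'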
00:06:44.379 20:07:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.379 20:07:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.379 20:07:21 -- paths/export.sh@5 -- # export PATH 00:06:44.379 20:07:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.379 20:07:21 -- nvmf/common.sh@46 -- # : 0 00:06:44.379 20:07:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:44.379 20:07:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:44.379 20:07:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:44.379 20:07:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.379 20:07:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.379 20:07:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:44.379 20:07:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:44.379 20:07:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:44.379 20:07:21 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:44.379 20:07:21 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:44.379 20:07:21 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:44.379 20:07:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:44.379 20:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.379 20:07:21 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:44.379 20:07:21 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.379 20:07:21 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:44.379 20:07:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:44.379 20:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.379 ************************************ 00:06:44.379 START TEST nvmf_example 00:06:44.379 ************************************ 00:06:44.379 20:07:21 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.379 * Looking for test storage... 
00:06:44.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.379 20:07:21 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.379 20:07:21 -- nvmf/common.sh@7 -- # uname -s 00:06:44.379 20:07:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.379 20:07:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.379 20:07:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.379 20:07:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.379 20:07:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.379 20:07:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.379 20:07:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.379 20:07:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.379 20:07:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.379 20:07:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.379 20:07:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:44.379 20:07:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:44.379 20:07:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.379 20:07:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.379 20:07:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.379 20:07:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.379 20:07:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.379 20:07:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.379 20:07:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.379 20:07:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.379 20:07:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.380 20:07:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.380 20:07:21 -- paths/export.sh@5 -- # export PATH 00:06:44.380 20:07:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.380 20:07:21 -- nvmf/common.sh@46 -- # : 0 00:06:44.380 20:07:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:44.380 20:07:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:44.380 20:07:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:44.380 20:07:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.380 20:07:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.380 20:07:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:44.380 20:07:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:44.380 20:07:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:44.380 20:07:21 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:44.380 20:07:21 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:44.380 20:07:21 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:44.380 20:07:21 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:44.380 20:07:21 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:44.380 20:07:21 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:44.380 20:07:21 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:44.380 20:07:21 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:44.380 20:07:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:44.380 20:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.380 20:07:21 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:44.380 20:07:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:44.380 20:07:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.380 20:07:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:44.380 20:07:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:44.380 20:07:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:44.380 20:07:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.380 20:07:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.380 20:07:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.380 20:07:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:44.380 20:07:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:44.380 20:07:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:44.380 20:07:21 -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.973 20:07:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:50.973 20:07:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:50.973 20:07:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:50.973 20:07:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:50.973 20:07:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:50.973 20:07:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:50.973 20:07:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:50.973 20:07:27 -- nvmf/common.sh@294 -- # net_devs=() 00:06:50.973 20:07:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:50.973 20:07:27 -- nvmf/common.sh@295 -- # e810=() 00:06:50.973 20:07:27 -- nvmf/common.sh@295 -- # local -ga e810 00:06:50.973 20:07:27 -- nvmf/common.sh@296 -- # x722=() 00:06:50.973 20:07:27 -- nvmf/common.sh@296 -- # local -ga x722 00:06:50.973 20:07:27 -- nvmf/common.sh@297 -- # mlx=() 00:06:50.973 20:07:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:50.973 20:07:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.973 20:07:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:50.973 20:07:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:50.973 20:07:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:50.973 20:07:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:50.973 20:07:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:50.973 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:50.973 20:07:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:50.973 20:07:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:50.973 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:50.973 20:07:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
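gather_supported_nvmf_pci_devs walks the PCI ID tables above (e810, x722, mlx) and matches both E810 ports here as 8086:0x159b bound to the ice driver. A rough equivalent query with lspci and sysfs, device ID taken from those tables:

  lspci -Dnn -d 8086:159b                        # the two ports found above: 0000:af:00.0/.1
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      ls /sys/bus/pci/devices/"$pci"/net/        # kernel netdev name, e.g. cvl_0_0, cvl_0_1
  done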
00:06:50.973 20:07:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:50.973 20:07:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:50.973 20:07:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:50.974 20:07:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:50.974 20:07:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.974 20:07:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:50.974 20:07:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.974 20:07:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:50.974 Found net devices under 0000:af:00.0: cvl_0_0 00:06:50.974 20:07:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.974 20:07:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:50.974 20:07:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.974 20:07:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:50.974 20:07:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.974 20:07:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:50.974 Found net devices under 0000:af:00.1: cvl_0_1 00:06:50.974 20:07:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.974 20:07:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:50.974 20:07:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:50.974 20:07:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:50.974 20:07:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:50.974 20:07:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:50.974 20:07:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.974 20:07:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.974 20:07:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.974 20:07:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:50.974 20:07:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.974 20:07:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.974 20:07:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:50.974 20:07:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.974 20:07:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.974 20:07:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:50.974 20:07:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:50.974 20:07:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.974 20:07:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.974 20:07:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.974 20:07:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.974 20:07:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:50.974 20:07:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.974 20:07:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.974 20:07:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.974 20:07:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:50.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:50.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:06:50.974 00:06:50.974 --- 10.0.0.2 ping statistics --- 00:06:50.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.974 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:06:50.974 20:07:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:06:50.974 00:06:50.974 --- 10.0.0.1 ping statistics --- 00:06:50.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.974 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:06:50.974 20:07:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.974 20:07:27 -- nvmf/common.sh@410 -- # return 0 00:06:50.974 20:07:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:50.974 20:07:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.974 20:07:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:50.974 20:07:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:50.974 20:07:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.974 20:07:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:50.974 20:07:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:50.974 20:07:27 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:50.974 20:07:27 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:50.974 20:07:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:50.974 20:07:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.974 20:07:27 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:50.974 20:07:27 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:50.974 20:07:27 -- target/nvmf_example.sh@34 -- # nvmfpid=1619100 00:06:50.974 20:07:27 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:50.974 20:07:27 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:50.974 20:07:27 -- target/nvmf_example.sh@36 -- # waitforlisten 1619100 00:06:50.974 20:07:27 -- common/autotest_common.sh@817 -- # '[' -z 1619100 ']' 00:06:50.974 20:07:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.974 20:07:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:50.974 20:07:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
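The nvmf_tcp_init block above splits the two E810 ports across namespaces: cvl_0_0 becomes the target port inside cvl_0_0_ns_spdk, cvl_0_1 stays in the default namespace as the initiator side, and the cross-namespace pings confirm the link. Condensed from the trace (run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # reachability check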
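The example target brought up next is provisioned entirely over JSON-RPC before spdk_nvme_perf drives I/O at it. A hedged sketch of the equivalent calls; the flags are copied from the trace, and rpc.py reaches the app through its filesystem UNIX socket regardless of the network namespace:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport (-o/-u flags as traced)
  $rpc bdev_malloc_create 64 512                  # 64 MiB malloc bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'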
00:06:50.974 20:07:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:50.974 20:07:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.974 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.234 20:07:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:51.234 20:07:28 -- common/autotest_common.sh@850 -- # return 0 00:06:51.234 20:07:28 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:51.234 20:07:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:51.234 20:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.234 20:07:28 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.234 20:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.234 20:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.234 20:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.234 20:07:28 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:51.234 20:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.234 20:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.234 20:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.234 20:07:28 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:51.234 20:07:28 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:51.234 20:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.234 20:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.234 20:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.234 20:07:28 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:51.234 20:07:28 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:51.234 20:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.234 20:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.234 20:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.234 20:07:28 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.234 20:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.234 20:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.234 20:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.234 20:07:28 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:51.234 20:07:28 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:51.234 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.500 Initializing NVMe Controllers 00:07:03.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:03.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:03.500 Initialization complete. Launching workers. 
00:07:03.500 ======================================================== 00:07:03.500 Latency(us) 00:07:03.500 Device Information : IOPS MiB/s Average min max 00:07:03.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14440.42 56.41 4431.64 682.98 15490.84 00:07:03.500 ======================================================== 00:07:03.500 Total : 14440.42 56.41 4431.64 682.98 15490.84 00:07:03.500 00:07:03.500 20:07:38 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:03.500 20:07:38 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:03.500 20:07:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:03.500 20:07:38 -- nvmf/common.sh@116 -- # sync 00:07:03.500 20:07:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:03.500 20:07:38 -- nvmf/common.sh@119 -- # set +e 00:07:03.500 20:07:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:03.500 20:07:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:03.500 rmmod nvme_tcp 00:07:03.500 rmmod nvme_fabrics 00:07:03.500 rmmod nvme_keyring 00:07:03.500 20:07:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:03.500 20:07:38 -- nvmf/common.sh@123 -- # set -e 00:07:03.500 20:07:38 -- nvmf/common.sh@124 -- # return 0 00:07:03.500 20:07:38 -- nvmf/common.sh@477 -- # '[' -n 1619100 ']' 00:07:03.500 20:07:38 -- nvmf/common.sh@478 -- # killprocess 1619100 00:07:03.500 20:07:38 -- common/autotest_common.sh@924 -- # '[' -z 1619100 ']' 00:07:03.500 20:07:38 -- common/autotest_common.sh@928 -- # kill -0 1619100 00:07:03.500 20:07:38 -- common/autotest_common.sh@929 -- # uname 00:07:03.500 20:07:38 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:03.500 20:07:38 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1619100 00:07:03.500 20:07:38 -- common/autotest_common.sh@930 -- # process_name=nvmf 00:07:03.500 20:07:38 -- common/autotest_common.sh@934 -- # '[' nvmf = sudo ']' 00:07:03.500 20:07:38 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1619100' 00:07:03.500 killing process with pid 1619100 00:07:03.500 20:07:38 -- common/autotest_common.sh@943 -- # kill 1619100 00:07:03.500 20:07:38 -- common/autotest_common.sh@948 -- # wait 1619100 00:07:03.500 nvmf threads initialize successfully 00:07:03.500 bdev subsystem init successfully 00:07:03.500 created a nvmf target service 00:07:03.500 create targets's poll groups done 00:07:03.500 all subsystems of target started 00:07:03.500 nvmf target is running 00:07:03.500 all subsystems of target stopped 00:07:03.500 destroy targets's poll groups done 00:07:03.500 destroyed the nvmf target service 00:07:03.500 bdev subsystem finish successfully 00:07:03.500 nvmf threads destroy successfully 00:07:03.500 20:07:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:03.500 20:07:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:03.500 20:07:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:03.500 20:07:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.500 20:07:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:03.500 20:07:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.500 20:07:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.500 20:07:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.760 20:07:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:03.760 20:07:41 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:03.760 20:07:41 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:07:03.760 20:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:03.760 00:07:03.760 real 0m19.545s 00:07:03.760 user 0m45.812s 00:07:03.760 sys 0m5.670s 00:07:03.760 20:07:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.760 20:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:03.760 ************************************ 00:07:03.760 END TEST nvmf_example 00:07:03.760 ************************************ 00:07:04.023 20:07:41 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:04.023 20:07:41 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:04.023 20:07:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:04.023 20:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 ************************************ 00:07:04.023 START TEST nvmf_filesystem 00:07:04.023 ************************************ 00:07:04.023 20:07:41 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:04.023 * Looking for test storage... 00:07:04.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.023 20:07:41 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:04.023 20:07:41 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:04.023 20:07:41 -- common/autotest_common.sh@34 -- # set -e 00:07:04.023 20:07:41 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:04.023 20:07:41 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:04.023 20:07:41 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:04.023 20:07:41 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:04.023 20:07:41 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:04.023 20:07:41 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:04.023 20:07:41 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:04.023 20:07:41 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:04.023 20:07:41 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:04.023 20:07:41 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:04.023 20:07:41 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:04.023 20:07:41 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:04.023 20:07:41 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:04.023 20:07:41 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:04.023 20:07:41 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:04.023 20:07:41 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:04.023 20:07:41 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:04.023 20:07:41 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:04.023 20:07:41 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:04.023 20:07:41 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:04.023 20:07:41 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:04.023 20:07:41 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:04.023 20:07:41 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:04.023 20:07:41 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:04.023 20:07:41 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:04.023 20:07:41 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:04.023 20:07:41 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:04.023 20:07:41 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:04.023 20:07:41 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:04.023 20:07:41 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:04.023 20:07:41 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:04.023 20:07:41 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:04.023 20:07:41 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:04.023 20:07:41 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:04.023 20:07:41 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:04.023 20:07:41 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:04.023 20:07:41 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:04.023 20:07:41 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:04.023 20:07:41 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:04.023 20:07:41 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:04.023 20:07:41 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:04.023 20:07:41 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:04.023 20:07:41 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:04.023 20:07:41 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:04.023 20:07:41 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:04.023 20:07:41 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:04.023 20:07:41 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:04.023 20:07:41 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:04.023 20:07:41 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:04.023 20:07:41 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:04.023 20:07:41 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:04.023 20:07:41 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:04.023 20:07:41 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:04.023 20:07:41 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:04.023 20:07:41 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:04.023 20:07:41 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:04.023 20:07:41 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:04.023 20:07:41 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:04.023 20:07:41 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:04.023 20:07:41 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:04.023 20:07:41 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:04.023 20:07:41 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:04.023 20:07:41 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:04.023 20:07:41 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:07:04.023 20:07:41 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:04.023 20:07:41 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:04.023 20:07:41 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:04.023 20:07:41 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:04.023 20:07:41 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:04.023 20:07:41 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
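build_config.sh is the shell-side record of the ./configure step; the same choices land in include/spdk/config.h, which is what applications.sh greps further down to confirm a debug build. A small check along those lines, paths assumed checkout-relative:

  grep -E '^#(define|undef) SPDK_CONFIG_(DEBUG|UBSAN|SHARED)' include/spdk/config.h
  # expected here, matching CONFIG_DEBUG=y / CONFIG_UBSAN=y / CONFIG_SHARED=y above:
  #   #define SPDK_CONFIG_DEBUG 1
  #   #define SPDK_CONFIG_UBSAN 1
  #   #define SPDK_CONFIG_SHARED 1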
00:07:04.023 20:07:41 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:04.023 20:07:41 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:04.023 20:07:41 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:04.023 20:07:41 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:04.023 20:07:41 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:04.023 20:07:41 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:04.023 20:07:41 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:04.023 20:07:41 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:04.023 20:07:41 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:04.023 20:07:41 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:04.023 20:07:41 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:04.023 20:07:41 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:04.023 20:07:41 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:04.023 20:07:41 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:04.023 20:07:41 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:04.023 20:07:41 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:04.023 20:07:41 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:04.023 20:07:41 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:04.023 20:07:41 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:04.023 20:07:41 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:04.024 20:07:41 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:04.024 20:07:41 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:04.024 20:07:41 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:04.024 20:07:41 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:04.024 20:07:41 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:04.024 20:07:41 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:04.024 20:07:41 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:04.024 20:07:41 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:04.024 20:07:41 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:04.024 #define SPDK_CONFIG_H 00:07:04.024 #define SPDK_CONFIG_APPS 1 00:07:04.024 #define SPDK_CONFIG_ARCH native 00:07:04.024 #undef SPDK_CONFIG_ASAN 00:07:04.024 #undef SPDK_CONFIG_AVAHI 00:07:04.024 #undef SPDK_CONFIG_CET 00:07:04.024 #define SPDK_CONFIG_COVERAGE 1 00:07:04.024 #define SPDK_CONFIG_CROSS_PREFIX 00:07:04.024 #undef SPDK_CONFIG_CRYPTO 00:07:04.024 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:04.024 #undef SPDK_CONFIG_CUSTOMOCF 00:07:04.024 #undef SPDK_CONFIG_DAOS 00:07:04.024 #define SPDK_CONFIG_DAOS_DIR 00:07:04.024 #define SPDK_CONFIG_DEBUG 1 00:07:04.024 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:04.024 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:04.024 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:04.024 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:07:04.024 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:04.024 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:04.024 #define SPDK_CONFIG_EXAMPLES 1 00:07:04.024 #undef SPDK_CONFIG_FC 00:07:04.024 #define SPDK_CONFIG_FC_PATH 00:07:04.024 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:04.024 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:04.024 #undef SPDK_CONFIG_FUSE 00:07:04.024 #undef SPDK_CONFIG_FUZZER 00:07:04.024 #define SPDK_CONFIG_FUZZER_LIB 00:07:04.024 #undef SPDK_CONFIG_GOLANG 00:07:04.024 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:04.024 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:04.024 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:04.024 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:04.024 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:04.024 #define SPDK_CONFIG_IDXD 1 00:07:04.024 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:04.024 #undef SPDK_CONFIG_IPSEC_MB 00:07:04.024 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:04.024 #define SPDK_CONFIG_ISAL 1 00:07:04.024 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:04.024 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:04.024 #define SPDK_CONFIG_LIBDIR 00:07:04.024 #undef SPDK_CONFIG_LTO 00:07:04.024 #define SPDK_CONFIG_MAX_LCORES 00:07:04.024 #define SPDK_CONFIG_NVME_CUSE 1 00:07:04.024 #undef SPDK_CONFIG_OCF 00:07:04.024 #define SPDK_CONFIG_OCF_PATH 00:07:04.024 #define SPDK_CONFIG_OPENSSL_PATH 00:07:04.024 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:04.024 #undef SPDK_CONFIG_PGO_USE 00:07:04.024 #define SPDK_CONFIG_PREFIX /usr/local 00:07:04.024 #undef SPDK_CONFIG_RAID5F 00:07:04.024 #undef SPDK_CONFIG_RBD 00:07:04.024 #define SPDK_CONFIG_RDMA 1 00:07:04.024 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:04.024 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:04.024 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:04.024 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:04.024 #define SPDK_CONFIG_SHARED 1 00:07:04.024 #undef SPDK_CONFIG_SMA 00:07:04.024 #define SPDK_CONFIG_TESTS 1 00:07:04.024 #undef SPDK_CONFIG_TSAN 00:07:04.024 #define SPDK_CONFIG_UBLK 1 00:07:04.024 #define SPDK_CONFIG_UBSAN 1 00:07:04.024 #undef SPDK_CONFIG_UNIT_TESTS 00:07:04.024 #undef SPDK_CONFIG_URING 00:07:04.024 #define SPDK_CONFIG_URING_PATH 00:07:04.024 #undef SPDK_CONFIG_URING_ZNS 00:07:04.024 #undef SPDK_CONFIG_USDT 00:07:04.024 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:04.024 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:04.024 #undef SPDK_CONFIG_VFIO_USER 00:07:04.024 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:04.024 #define SPDK_CONFIG_VHOST 1 00:07:04.024 #define SPDK_CONFIG_VIRTIO 1 00:07:04.024 #undef SPDK_CONFIG_VTUNE 00:07:04.024 #define SPDK_CONFIG_VTUNE_DIR 00:07:04.024 #define SPDK_CONFIG_WERROR 1 00:07:04.024 #define SPDK_CONFIG_WPDK_DIR 00:07:04.024 #undef SPDK_CONFIG_XNVME 00:07:04.024 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:04.024 20:07:41 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:04.024 20:07:41 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.024 20:07:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.024 20:07:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.024 20:07:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.024 20:07:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.024 20:07:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.024 20:07:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.024 20:07:41 -- paths/export.sh@5 -- # export PATH 00:07:04.024 20:07:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.024 20:07:41 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:04.024 20:07:41 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:04.024 20:07:41 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:04.024 20:07:41 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:04.024 20:07:41 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:04.024 20:07:41 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:04.024 20:07:41 -- pm/common@16 -- # TEST_TAG=N/A 00:07:04.024 20:07:41 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:04.024 20:07:41 -- common/autotest_common.sh@52 -- # : 1 00:07:04.024 20:07:41 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:04.024 20:07:41 -- common/autotest_common.sh@56 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:04.024 20:07:41 -- 
common/autotest_common.sh@58 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:04.024 20:07:41 -- common/autotest_common.sh@60 -- # : 1 00:07:04.024 20:07:41 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:04.024 20:07:41 -- common/autotest_common.sh@62 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:04.024 20:07:41 -- common/autotest_common.sh@64 -- # : 00:07:04.024 20:07:41 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:04.024 20:07:41 -- common/autotest_common.sh@66 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:04.024 20:07:41 -- common/autotest_common.sh@68 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:04.024 20:07:41 -- common/autotest_common.sh@70 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:04.024 20:07:41 -- common/autotest_common.sh@72 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:04.024 20:07:41 -- common/autotest_common.sh@74 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:04.024 20:07:41 -- common/autotest_common.sh@76 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:04.024 20:07:41 -- common/autotest_common.sh@78 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:04.024 20:07:41 -- common/autotest_common.sh@80 -- # : 1 00:07:04.024 20:07:41 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:04.024 20:07:41 -- common/autotest_common.sh@82 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:04.024 20:07:41 -- common/autotest_common.sh@84 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:04.024 20:07:41 -- common/autotest_common.sh@86 -- # : 1 00:07:04.024 20:07:41 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:04.024 20:07:41 -- common/autotest_common.sh@88 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:04.024 20:07:41 -- common/autotest_common.sh@90 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:04.024 20:07:41 -- common/autotest_common.sh@92 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:04.024 20:07:41 -- common/autotest_common.sh@94 -- # : 0 00:07:04.024 20:07:41 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:04.024 20:07:41 -- common/autotest_common.sh@96 -- # : tcp 00:07:04.025 20:07:41 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:04.025 20:07:41 -- common/autotest_common.sh@98 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:04.025 20:07:41 -- common/autotest_common.sh@100 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:04.025 20:07:41 -- common/autotest_common.sh@102 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:04.025 20:07:41 -- common/autotest_common.sh@104 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:04.025 
20:07:41 -- common/autotest_common.sh@106 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:04.025 20:07:41 -- common/autotest_common.sh@108 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:04.025 20:07:41 -- common/autotest_common.sh@110 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:04.025 20:07:41 -- common/autotest_common.sh@112 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:04.025 20:07:41 -- common/autotest_common.sh@114 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:04.025 20:07:41 -- common/autotest_common.sh@116 -- # : 1 00:07:04.025 20:07:41 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:04.025 20:07:41 -- common/autotest_common.sh@118 -- # : 00:07:04.025 20:07:41 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:04.025 20:07:41 -- common/autotest_common.sh@120 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:04.025 20:07:41 -- common/autotest_common.sh@122 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:04.025 20:07:41 -- common/autotest_common.sh@124 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:04.025 20:07:41 -- common/autotest_common.sh@126 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:04.025 20:07:41 -- common/autotest_common.sh@128 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:04.025 20:07:41 -- common/autotest_common.sh@130 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:04.025 20:07:41 -- common/autotest_common.sh@132 -- # : 00:07:04.025 20:07:41 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:04.025 20:07:41 -- common/autotest_common.sh@134 -- # : true 00:07:04.025 20:07:41 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:04.025 20:07:41 -- common/autotest_common.sh@136 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:04.025 20:07:41 -- common/autotest_common.sh@138 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:04.025 20:07:41 -- common/autotest_common.sh@140 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:04.025 20:07:41 -- common/autotest_common.sh@142 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:04.025 20:07:41 -- common/autotest_common.sh@144 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:04.025 20:07:41 -- common/autotest_common.sh@146 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:04.025 20:07:41 -- common/autotest_common.sh@148 -- # : e810 00:07:04.025 20:07:41 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:04.025 20:07:41 -- common/autotest_common.sh@150 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:04.025 20:07:41 -- common/autotest_common.sh@152 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
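The long run of ": 0" / "export SPDK_TEST_*" pairs traced here is autotest_common.sh assigning defaults: bash's no-op colon with ${VAR:=default} expansion keeps any value already exported via autorun-spdk.conf (hence the ": 1" for SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_NVMF, SPDK_TEST_NVME_CLI and SPDK_RUN_UBSAN, ": tcp" for the transport, ": e810" for the NICs) and falls back to 0 or empty otherwise. A minimal sketch of the idiom, with the fallback values shown only for illustration:

    # ${VAR:=default} assigns only when VAR is unset, so autorun-spdk.conf wins;
    # xtrace prints the post-expansion no-op, which is the ": 1" / ": 0" seen above.
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"; export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=0}";           export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_NICS:=}";       export SPDK_TEST_NVMF_NICS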
00:07:04.025 20:07:41 -- common/autotest_common.sh@154 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:04.025 20:07:41 -- common/autotest_common.sh@156 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:04.025 20:07:41 -- common/autotest_common.sh@158 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:04.025 20:07:41 -- common/autotest_common.sh@161 -- # : 00:07:04.025 20:07:41 -- common/autotest_common.sh@162 -- # export SPDK_TEST_FUZZER_TARGET 00:07:04.025 20:07:41 -- common/autotest_common.sh@163 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@164 -- # export SPDK_TEST_NVMF_MDNS 00:07:04.025 20:07:41 -- common/autotest_common.sh@165 -- # : 0 00:07:04.025 20:07:41 -- common/autotest_common.sh@166 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:04.025 20:07:41 -- common/autotest_common.sh@169 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:04.025 20:07:41 -- common/autotest_common.sh@169 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:04.025 20:07:41 -- common/autotest_common.sh@170 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:04.025 20:07:41 -- common/autotest_common.sh@170 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:04.025 20:07:41 -- common/autotest_common.sh@171 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.025 20:07:41 -- common/autotest_common.sh@171 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.025 20:07:41 -- common/autotest_common.sh@172 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.025 20:07:41 -- common/autotest_common.sh@172 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.025 20:07:41 -- common/autotest_common.sh@175 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:04.025 20:07:41 -- common/autotest_common.sh@175 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:04.025 20:07:41 -- common/autotest_common.sh@179 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:04.025 20:07:41 -- common/autotest_common.sh@179 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:04.025 20:07:41 -- common/autotest_common.sh@183 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:04.025 20:07:41 -- common/autotest_common.sh@183 -- # PYTHONDONTWRITEBYTECODE=1 00:07:04.025 20:07:41 -- common/autotest_common.sh@187 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:04.025 20:07:41 -- common/autotest_common.sh@187 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:04.025 20:07:41 -- common/autotest_common.sh@188 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:04.025 20:07:41 -- common/autotest_common.sh@188 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:04.025 20:07:41 -- common/autotest_common.sh@192 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:04.025 20:07:41 -- common/autotest_common.sh@193 -- # rm -rf /var/tmp/asan_suppression_file 00:07:04.025 20:07:41 -- common/autotest_common.sh@194 -- # cat 00:07:04.025 20:07:41 -- common/autotest_common.sh@220 -- # echo leak:libfuse3.so 00:07:04.025 20:07:41 -- common/autotest_common.sh@222 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:04.025 20:07:41 -- common/autotest_common.sh@222 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:04.025 20:07:41 -- common/autotest_common.sh@224 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:04.025 20:07:41 -- common/autotest_common.sh@224 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:04.025 20:07:41 -- common/autotest_common.sh@226 -- # '[' -z /var/spdk/dependencies ']' 00:07:04.025 20:07:41 -- common/autotest_common.sh@229 -- # export DEPENDENCY_DIR 00:07:04.025 20:07:41 -- common/autotest_common.sh@233 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:04.025 20:07:41 -- common/autotest_common.sh@233 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:04.025 20:07:41 -- common/autotest_common.sh@234 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:04.025 20:07:41 -- common/autotest_common.sh@234 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:04.025 20:07:41 -- common/autotest_common.sh@237 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:04.025 20:07:41 -- common/autotest_common.sh@237 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:04.025 20:07:41 -- common/autotest_common.sh@238 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:04.025 20:07:41 -- common/autotest_common.sh@238 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:04.025 20:07:41 -- common/autotest_common.sh@240 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:04.025 20:07:41 -- common/autotest_common.sh@240 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:04.025 20:07:41 -- common/autotest_common.sh@243 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:04.025 20:07:41 -- common/autotest_common.sh@243 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:04.025 20:07:41 -- common/autotest_common.sh@246 -- # '[' 0 -eq 0 ']' 00:07:04.025 20:07:41 -- common/autotest_common.sh@247 -- # export valgrind= 00:07:04.025 20:07:41 -- common/autotest_common.sh@247 -- # valgrind= 00:07:04.025 20:07:41 -- common/autotest_common.sh@253 -- # uname -s 00:07:04.026 20:07:41 -- common/autotest_common.sh@253 -- # '[' Linux = Linux ']' 00:07:04.026 20:07:41 -- common/autotest_common.sh@254 -- # HUGEMEM=4096 00:07:04.026 20:07:41 -- common/autotest_common.sh@255 -- # export CLEAR_HUGE=yes 00:07:04.026 20:07:41 -- common/autotest_common.sh@255 -- # CLEAR_HUGE=yes 00:07:04.026 20:07:41 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@263 -- # MAKE=make 00:07:04.026 20:07:41 -- common/autotest_common.sh@264 -- # MAKEFLAGS=-j96 00:07:04.026 20:07:41 -- common/autotest_common.sh@280 -- # export HUGEMEM=4096 00:07:04.026 20:07:41 -- common/autotest_common.sh@280 -- # HUGEMEM=4096 00:07:04.026 20:07:41 -- common/autotest_common.sh@282 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:04.026 20:07:41 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:04.026 20:07:41 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:04.026 20:07:41 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:04.026 20:07:41 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:04.026 20:07:41 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:04.026 20:07:41 -- common/autotest_common.sh@307 -- # [[ -z 1621524 ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@307 -- # 
kill -0 1621524 00:07:04.026 20:07:41 -- common/autotest_common.sh@1663 -- # set_test_storage 2147483648 00:07:04.026 20:07:41 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:04.026 20:07:41 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:04.026 20:07:41 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:04.026 20:07:41 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:04.026 20:07:41 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:04.026 20:07:41 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:04.026 20:07:41 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.ro1x4W 00:07:04.026 20:07:41 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:04.026 20:07:41 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ro1x4W/tests/target /tmp/spdk.ro1x4W 00:07:04.026 20:07:41 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:04.026 20:07:41 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.026 20:07:41 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:04.026 20:07:41 -- common/autotest_common.sh@316 -- # df -T 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:04.026 20:07:41 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:04.026 20:07:41 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # avails["$mount"]=931024896 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:04.026 20:07:41 -- common/autotest_common.sh@352 -- # uses["$mount"]=4353404928 00:07:04.026 20:07:41 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # avails["$mount"]=86261411840 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # sizes["$mount"]=95562752000 00:07:04.026 20:07:41 -- common/autotest_common.sh@352 -- # uses["$mount"]=9301340160 00:07:04.026 20:07:41 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # avails["$mount"]=47780118528 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # 
sizes["$mount"]=47781376000 00:07:04.026 20:07:41 -- common/autotest_common.sh@352 -- # uses["$mount"]=1257472 00:07:04.026 20:07:41 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # avails["$mount"]=19102998528 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # sizes["$mount"]=19112550400 00:07:04.026 20:07:41 -- common/autotest_common.sh@352 -- # uses["$mount"]=9551872 00:07:04.026 20:07:41 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # avails["$mount"]=47780663296 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # sizes["$mount"]=47781376000 00:07:04.026 20:07:41 -- common/autotest_common.sh@352 -- # uses["$mount"]=712704 00:07:04.026 20:07:41 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # avails["$mount"]=9556271104 00:07:04.026 20:07:41 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9556275200 00:07:04.026 20:07:41 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:04.026 20:07:41 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.026 20:07:41 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:04.026 * Looking for test storage... 
00:07:04.026 20:07:41 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:04.026 20:07:41 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:04.026 20:07:41 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.026 20:07:41 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:04.026 20:07:41 -- common/autotest_common.sh@361 -- # mount=/ 00:07:04.026 20:07:41 -- common/autotest_common.sh@363 -- # target_space=86261411840 00:07:04.026 20:07:41 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:04.026 20:07:41 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:04.026 20:07:41 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@370 -- # new_size=11515932672 00:07:04.026 20:07:41 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:04.026 20:07:41 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.026 20:07:41 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.026 20:07:41 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.026 20:07:41 -- common/autotest_common.sh@378 -- # return 0 00:07:04.026 20:07:41 -- common/autotest_common.sh@1665 -- # set -o errtrace 00:07:04.026 20:07:41 -- common/autotest_common.sh@1666 -- # shopt -s extdebug 00:07:04.026 20:07:41 -- common/autotest_common.sh@1667 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:04.026 20:07:41 -- common/autotest_common.sh@1669 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:04.026 20:07:41 -- common/autotest_common.sh@1670 -- # true 00:07:04.026 20:07:41 -- common/autotest_common.sh@1672 -- # xtrace_fd 00:07:04.026 20:07:41 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:04.026 20:07:41 -- common/autotest_common.sh@27 -- # exec 00:07:04.026 20:07:41 -- common/autotest_common.sh@29 -- # exec 00:07:04.026 20:07:41 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:04.026 20:07:41 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:04.026 20:07:41 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:04.026 20:07:41 -- common/autotest_common.sh@18 -- # set -x 00:07:04.026 20:07:41 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.026 20:07:41 -- nvmf/common.sh@7 -- # uname -s 00:07:04.026 20:07:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.026 20:07:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.026 20:07:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.026 20:07:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.026 20:07:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.026 20:07:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.026 20:07:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.026 20:07:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.026 20:07:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.026 20:07:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.026 20:07:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:04.026 20:07:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:04.026 20:07:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.026 20:07:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.026 20:07:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.026 20:07:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.026 20:07:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.026 20:07:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.026 20:07:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.027 20:07:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.027 20:07:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.027 20:07:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.027 20:07:41 -- paths/export.sh@5 -- # export PATH 00:07:04.027 20:07:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.027 20:07:41 -- nvmf/common.sh@46 -- # : 0 00:07:04.027 20:07:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:04.027 20:07:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:04.027 20:07:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:04.027 20:07:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.027 20:07:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.027 20:07:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:04.027 20:07:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:04.027 20:07:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:04.287 20:07:41 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:04.287 20:07:41 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:04.287 20:07:41 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:04.287 20:07:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:04.287 20:07:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.287 20:07:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:04.287 20:07:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:04.287 20:07:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:04.287 20:07:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.287 20:07:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.287 20:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.287 20:07:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:04.287 20:07:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:04.287 20:07:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:04.287 20:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.866 20:07:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:10.866 20:07:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:10.866 20:07:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:10.866 20:07:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:10.866 20:07:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:10.866 20:07:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:10.866 20:07:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:10.866 20:07:47 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:10.866 20:07:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:10.866 20:07:47 -- nvmf/common.sh@295 -- # e810=() 00:07:10.866 20:07:47 -- nvmf/common.sh@295 -- # local -ga e810 00:07:10.866 20:07:47 -- nvmf/common.sh@296 -- # x722=() 00:07:10.866 20:07:47 -- nvmf/common.sh@296 -- # local -ga x722 00:07:10.866 20:07:47 -- nvmf/common.sh@297 -- # mlx=() 00:07:10.866 20:07:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:10.866 20:07:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.866 20:07:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.867 20:07:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:10.867 20:07:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:10.867 20:07:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:10.867 20:07:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:10.867 20:07:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:10.867 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:10.867 20:07:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:10.867 20:07:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:10.867 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:10.867 20:07:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:10.867 20:07:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:10.867 20:07:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.867 20:07:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:10.867 20:07:47 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.867 20:07:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:10.867 Found net devices under 0000:af:00.0: cvl_0_0 00:07:10.867 20:07:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.867 20:07:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:10.867 20:07:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.867 20:07:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:10.867 20:07:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.867 20:07:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:10.867 Found net devices under 0000:af:00.1: cvl_0_1 00:07:10.867 20:07:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.867 20:07:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:10.867 20:07:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:10.867 20:07:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:10.867 20:07:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.867 20:07:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.867 20:07:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.867 20:07:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:10.867 20:07:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.867 20:07:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.867 20:07:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:10.867 20:07:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.867 20:07:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.867 20:07:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:10.867 20:07:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:10.867 20:07:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.867 20:07:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.867 20:07:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.867 20:07:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.867 20:07:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:10.867 20:07:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.867 20:07:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.867 20:07:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.867 20:07:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:10.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:07:10.867 00:07:10.867 --- 10.0.0.2 ping statistics --- 00:07:10.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.867 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:07:10.867 20:07:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:10.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:07:10.867 00:07:10.867 --- 10.0.0.1 ping statistics --- 00:07:10.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.867 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:07:10.867 20:07:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.867 20:07:47 -- nvmf/common.sh@410 -- # return 0 00:07:10.867 20:07:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:10.867 20:07:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.867 20:07:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:10.867 20:07:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.867 20:07:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:10.867 20:07:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:10.867 20:07:47 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:10.867 20:07:47 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:10.867 20:07:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:10.867 20:07:47 -- common/autotest_common.sh@10 -- # set +x 00:07:10.867 ************************************ 00:07:10.867 START TEST nvmf_filesystem_no_in_capsule 00:07:10.867 ************************************ 00:07:10.867 20:07:47 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_part 0 00:07:10.867 20:07:47 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:10.867 20:07:47 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:10.867 20:07:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:10.867 20:07:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:10.867 20:07:47 -- common/autotest_common.sh@10 -- # set +x 00:07:10.867 20:07:47 -- nvmf/common.sh@469 -- # nvmfpid=1624860 00:07:10.867 20:07:47 -- nvmf/common.sh@470 -- # waitforlisten 1624860 00:07:10.867 20:07:47 -- common/autotest_common.sh@817 -- # '[' -z 1624860 ']' 00:07:10.867 20:07:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.867 20:07:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.867 20:07:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.867 20:07:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.867 20:07:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.867 20:07:47 -- common/autotest_common.sh@10 -- # set +x 00:07:10.867 [2024-02-14 20:07:47.704054] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
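At this point the bench is fully wired: nvmf_tcp_init moved the first e810 port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, left cvl_0_1 in the root namespace as the initiator at 10.0.0.1, opened TCP port 4420, and the two pings above verified the path in both directions before nvmfappstart launched nvmf_tgt inside the namespace. Condensed from the trace (nvmf_tgt path abbreviated):

    # Point-to-point TCP test link between the two e810 ports.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Target runs inside the namespace (pid 1624860 in this run):
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &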
00:07:10.867 [2024-02-14 20:07:47.704096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.867 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.867 [2024-02-14 20:07:47.765111] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.867 [2024-02-14 20:07:47.841227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.867 [2024-02-14 20:07:47.841346] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.867 [2024-02-14 20:07:47.841354] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.867 [2024-02-14 20:07:47.841360] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.867 [2024-02-14 20:07:47.841399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.867 [2024-02-14 20:07:47.841509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.867 [2024-02-14 20:07:47.841598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.867 [2024-02-14 20:07:47.841597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.127 20:07:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.127 20:07:48 -- common/autotest_common.sh@850 -- # return 0 00:07:11.127 20:07:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:11.127 20:07:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:11.127 20:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.127 20:07:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.127 20:07:48 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:11.127 20:07:48 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:11.127 20:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.127 20:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.387 [2024-02-14 20:07:48.547872] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.387 20:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.387 20:07:48 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:11.387 20:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.387 20:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.387 Malloc1 00:07:11.387 20:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.387 20:07:48 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.387 20:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.387 20:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.387 20:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.387 20:07:48 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.387 20:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.387 20:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.387 20:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.387 20:07:48 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
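The rpc_cmd calls traced above provision the target end to end: a TCP transport with an 8 KiB I/O unit and (for this first suite) zero in-capsule data, a 512 MiB malloc bdev with 512-byte blocks, a subsystem with serial SPDKISFASTANDAWESOME, the bdev attached as its namespace, and a listener on 10.0.0.2:4420. Equivalent standalone invocations (a sketch; rpc.py talks to the default /var/tmp/spdk.sock):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

The initiator side that follows mirrors this with a plain `nvme connect` against the same NQN and address, then polls lsblk until a block device with that serial appears.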
00:07:11.387 20:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.387 20:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.387 [2024-02-14 20:07:48.691170] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.387 20:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.387 20:07:48 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:11.387 20:07:48 -- common/autotest_common.sh@1355 -- # local bdev_name=Malloc1 00:07:11.387 20:07:48 -- common/autotest_common.sh@1356 -- # local bdev_info 00:07:11.387 20:07:48 -- common/autotest_common.sh@1357 -- # local bs 00:07:11.387 20:07:48 -- common/autotest_common.sh@1358 -- # local nb 00:07:11.387 20:07:48 -- common/autotest_common.sh@1359 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:11.387 20:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.387 20:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.387 20:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.387 20:07:48 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:07:11.387 { 00:07:11.387 "name": "Malloc1", 00:07:11.387 "aliases": [ 00:07:11.387 "8273e709-5207-4dd7-aa2d-a9f6b8b8c869" 00:07:11.387 ], 00:07:11.387 "product_name": "Malloc disk", 00:07:11.387 "block_size": 512, 00:07:11.387 "num_blocks": 1048576, 00:07:11.387 "uuid": "8273e709-5207-4dd7-aa2d-a9f6b8b8c869", 00:07:11.387 "assigned_rate_limits": { 00:07:11.387 "rw_ios_per_sec": 0, 00:07:11.387 "rw_mbytes_per_sec": 0, 00:07:11.387 "r_mbytes_per_sec": 0, 00:07:11.387 "w_mbytes_per_sec": 0 00:07:11.387 }, 00:07:11.387 "claimed": true, 00:07:11.387 "claim_type": "exclusive_write", 00:07:11.387 "zoned": false, 00:07:11.388 "supported_io_types": { 00:07:11.388 "read": true, 00:07:11.388 "write": true, 00:07:11.388 "unmap": true, 00:07:11.388 "write_zeroes": true, 00:07:11.388 "flush": true, 00:07:11.388 "reset": true, 00:07:11.388 "compare": false, 00:07:11.388 "compare_and_write": false, 00:07:11.388 "abort": true, 00:07:11.388 "nvme_admin": false, 00:07:11.388 "nvme_io": false 00:07:11.388 }, 00:07:11.388 "memory_domains": [ 00:07:11.388 { 00:07:11.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.388 "dma_device_type": 2 00:07:11.388 } 00:07:11.388 ], 00:07:11.388 "driver_specific": {} 00:07:11.388 } 00:07:11.388 ]' 00:07:11.388 20:07:48 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:07:11.388 20:07:48 -- common/autotest_common.sh@1360 -- # bs=512 00:07:11.388 20:07:48 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:07:11.388 20:07:48 -- common/autotest_common.sh@1361 -- # nb=1048576 00:07:11.388 20:07:48 -- common/autotest_common.sh@1364 -- # bdev_size=512 00:07:11.388 20:07:48 -- common/autotest_common.sh@1365 -- # echo 512 00:07:11.647 20:07:48 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:11.647 20:07:48 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.587 20:07:49 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:12.587 20:07:49 -- common/autotest_common.sh@1175 -- # local i=0 00:07:12.587 20:07:49 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.588 20:07:49 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:07:12.588 20:07:49 -- common/autotest_common.sh@1182 -- # sleep 2 00:07:15.127 20:07:51 
-- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:07:15.127 20:07:51 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:07:15.127 20:07:51 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.127 20:07:51 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:07:15.127 20:07:51 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.127 20:07:51 -- common/autotest_common.sh@1185 -- # return 0 00:07:15.127 20:07:51 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:15.127 20:07:51 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:15.127 20:07:51 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:15.127 20:07:51 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:15.127 20:07:51 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:15.127 20:07:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:15.127 20:07:51 -- setup/common.sh@80 -- # echo 536870912 00:07:15.127 20:07:51 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:15.127 20:07:51 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:15.127 20:07:51 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:15.127 20:07:51 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:15.127 20:07:52 -- target/filesystem.sh@69 -- # partprobe 00:07:15.697 20:07:52 -- target/filesystem.sh@70 -- # sleep 1 00:07:16.636 20:07:53 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:16.636 20:07:53 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:16.636 20:07:53 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:16.636 20:07:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:16.636 20:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.636 ************************************ 00:07:16.636 START TEST filesystem_ext4 00:07:16.636 ************************************ 00:07:16.636 20:07:53 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:16.636 20:07:53 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:16.636 20:07:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.636 20:07:53 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:16.636 20:07:53 -- common/autotest_common.sh@900 -- # local fstype=ext4 00:07:16.636 20:07:53 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:16.636 20:07:53 -- common/autotest_common.sh@902 -- # local i=0 00:07:16.636 20:07:53 -- common/autotest_common.sh@903 -- # local force 00:07:16.636 20:07:53 -- common/autotest_common.sh@905 -- # '[' ext4 = ext4 ']' 00:07:16.636 20:07:53 -- common/autotest_common.sh@906 -- # force=-F 00:07:16.636 20:07:53 -- common/autotest_common.sh@911 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:16.636 mke2fs 1.46.5 (30-Dec-2021) 00:07:16.636 Discarding device blocks: 0/522240 done 00:07:16.636 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:16.636 Filesystem UUID: 56b1b82a-3557-43f4-b549-1a809bac39b7 00:07:16.636 Superblock backups stored on blocks: 00:07:16.636 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:16.636 00:07:16.636 Allocating group tables: 0/64 done 00:07:16.636 Writing inode tables: 0/64 done 00:07:16.895 Creating journal (8192 blocks): done 00:07:16.895 Writing superblocks and filesystem accounting information: 0/64 done 00:07:16.895 00:07:16.895 20:07:54 -- 
common/autotest_common.sh@919 -- # return 0 00:07:16.895 20:07:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.895 20:07:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:17.154 20:07:54 -- target/filesystem.sh@25 -- # sync 00:07:17.154 20:07:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:17.154 20:07:54 -- target/filesystem.sh@27 -- # sync 00:07:17.154 20:07:54 -- target/filesystem.sh@29 -- # i=0 00:07:17.154 20:07:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:17.154 20:07:54 -- target/filesystem.sh@37 -- # kill -0 1624860 00:07:17.154 20:07:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:17.154 20:07:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:17.154 20:07:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:17.154 20:07:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:17.154 00:07:17.154 real 0m0.525s 00:07:17.154 user 0m0.030s 00:07:17.154 sys 0m0.057s 00:07:17.154 20:07:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.154 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.154 ************************************ 00:07:17.154 END TEST filesystem_ext4 00:07:17.154 ************************************ 00:07:17.154 20:07:54 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:17.154 20:07:54 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:17.154 20:07:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:17.154 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.154 ************************************ 00:07:17.154 START TEST filesystem_btrfs 00:07:17.154 ************************************ 00:07:17.154 20:07:54 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:17.154 20:07:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:17.154 20:07:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:17.154 20:07:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:17.154 20:07:54 -- common/autotest_common.sh@900 -- # local fstype=btrfs 00:07:17.154 20:07:54 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:17.154 20:07:54 -- common/autotest_common.sh@902 -- # local i=0 00:07:17.154 20:07:54 -- common/autotest_common.sh@903 -- # local force 00:07:17.154 20:07:54 -- common/autotest_common.sh@905 -- # '[' btrfs = ext4 ']' 00:07:17.154 20:07:54 -- common/autotest_common.sh@908 -- # force=-f 00:07:17.154 20:07:54 -- common/autotest_common.sh@911 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:17.723 btrfs-progs v6.6.2 00:07:17.723 See https://btrfs.readthedocs.io for more information. 00:07:17.723 00:07:17.723 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:17.723 NOTE: several default settings have changed in version 5.15, please make sure 00:07:17.723 this does not affect your deployments: 00:07:17.723 - DUP for metadata (-m dup) 00:07:17.723 - enabled no-holes (-O no-holes) 00:07:17.723 - enabled free-space-tree (-R free-space-tree) 00:07:17.723 00:07:17.723 Label: (null) 00:07:17.723 UUID: 807365c8-713b-4b44-bdb5-c7a281e66534 00:07:17.723 Node size: 16384 00:07:17.723 Sector size: 4096 00:07:17.723 Filesystem size: 510.00MiB 00:07:17.723 Block group profiles: 00:07:17.723 Data: single 8.00MiB 00:07:17.723 Metadata: DUP 32.00MiB 00:07:17.723 System: DUP 8.00MiB 00:07:17.723 SSD detected: yes 00:07:17.723 Zoned device: no 00:07:17.723 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:17.723 Runtime features: free-space-tree 00:07:17.723 Checksum: crc32c 00:07:17.723 Number of devices: 1 00:07:17.723 Devices: 00:07:17.723 ID SIZE PATH 00:07:17.723 1 510.00MiB /dev/nvme0n1p1 00:07:17.723 00:07:17.723 20:07:54 -- common/autotest_common.sh@919 -- # return 0 00:07:17.723 20:07:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:17.982 20:07:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:17.982 20:07:55 -- target/filesystem.sh@25 -- # sync 00:07:17.982 20:07:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:17.982 20:07:55 -- target/filesystem.sh@27 -- # sync 00:07:17.982 20:07:55 -- target/filesystem.sh@29 -- # i=0 00:07:17.982 20:07:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:17.982 20:07:55 -- target/filesystem.sh@37 -- # kill -0 1624860 00:07:17.982 20:07:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:17.982 20:07:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:17.982 20:07:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:17.982 20:07:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:17.982 00:07:17.982 real 0m0.880s 00:07:17.982 user 0m0.027s 00:07:17.982 sys 0m0.124s 00:07:17.982 20:07:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.982 20:07:55 -- common/autotest_common.sh@10 -- # set +x 00:07:17.982 ************************************ 00:07:17.982 END TEST filesystem_btrfs 00:07:17.982 ************************************ 00:07:17.982 20:07:55 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:17.982 20:07:55 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:17.982 20:07:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:17.982 20:07:55 -- common/autotest_common.sh@10 -- # set +x 00:07:17.982 ************************************ 00:07:17.982 START TEST filesystem_xfs 00:07:17.982 ************************************ 00:07:17.982 20:07:55 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create xfs nvme0n1 00:07:17.982 20:07:55 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:17.982 20:07:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:17.982 20:07:55 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:17.982 20:07:55 -- common/autotest_common.sh@900 -- # local fstype=xfs 00:07:17.982 20:07:55 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:17.982 20:07:55 -- common/autotest_common.sh@902 -- # local i=0 00:07:17.982 20:07:55 -- common/autotest_common.sh@903 -- # local force 00:07:17.983 20:07:55 -- common/autotest_common.sh@905 -- # '[' xfs = ext4 ']' 00:07:17.983 20:07:55 -- common/autotest_common.sh@908 -- # force=-f 00:07:17.983 20:07:55 -- 
common/autotest_common.sh@911 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:18.242 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:18.242 = sectsz=512 attr=2, projid32bit=1 00:07:18.242 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:18.242 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:18.242 data = bsize=4096 blocks=130560, imaxpct=25 00:07:18.242 = sunit=0 swidth=0 blks 00:07:18.242 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:18.242 log =internal log bsize=4096 blocks=16384, version=2 00:07:18.242 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:18.242 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:19.181 Discarding blocks...Done. 00:07:19.181 20:07:56 -- common/autotest_common.sh@919 -- # return 0 00:07:19.181 20:07:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:21.722 20:07:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:21.722 20:07:58 -- target/filesystem.sh@25 -- # sync 00:07:21.722 20:07:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:21.722 20:07:58 -- target/filesystem.sh@27 -- # sync 00:07:21.722 20:07:58 -- target/filesystem.sh@29 -- # i=0 00:07:21.722 20:07:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:21.722 20:07:58 -- target/filesystem.sh@37 -- # kill -0 1624860 00:07:21.722 20:07:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:21.722 20:07:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:21.722 20:07:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:21.722 20:07:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:21.722 00:07:21.722 real 0m3.269s 00:07:21.722 user 0m0.025s 00:07:21.722 sys 0m0.070s 00:07:21.722 20:07:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.722 20:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.722 ************************************ 00:07:21.722 END TEST filesystem_xfs 00:07:21.722 ************************************ 00:07:21.722 20:07:58 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:21.722 20:07:58 -- target/filesystem.sh@93 -- # sync 00:07:21.722 20:07:58 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.722 20:07:59 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.722 20:07:59 -- common/autotest_common.sh@1196 -- # local i=0 00:07:21.722 20:07:59 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:07:21.722 20:07:59 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.722 20:07:59 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:21.722 20:07:59 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.722 20:07:59 -- common/autotest_common.sh@1208 -- # return 0 00:07:21.722 20:07:59 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.722 20:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.722 20:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:21.722 20:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.722 20:07:59 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:21.722 20:07:59 -- target/filesystem.sh@101 -- # killprocess 1624860 00:07:21.722 20:07:59 -- common/autotest_common.sh@924 -- # '[' -z 1624860 ']' 00:07:21.722 20:07:59 -- common/autotest_common.sh@928 -- # kill -0 1624860 00:07:21.722 20:07:59 -- 
common/autotest_common.sh@929 -- # uname 00:07:21.722 20:07:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:21.722 20:07:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1624860 00:07:21.722 20:07:59 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:21.722 20:07:59 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:21.722 20:07:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1624860' 00:07:21.722 killing process with pid 1624860 00:07:21.722 20:07:59 -- common/autotest_common.sh@943 -- # kill 1624860 00:07:21.722 20:07:59 -- common/autotest_common.sh@948 -- # wait 1624860 00:07:22.292 20:07:59 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:22.292 00:07:22.292 real 0m11.828s 00:07:22.292 user 0m46.377s 00:07:22.292 sys 0m1.128s 00:07:22.292 20:07:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.292 20:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:22.292 ************************************ 00:07:22.292 END TEST nvmf_filesystem_no_in_capsule 00:07:22.292 ************************************ 00:07:22.292 20:07:59 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:22.292 20:07:59 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:22.292 20:07:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:22.292 20:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:22.292 ************************************ 00:07:22.292 START TEST nvmf_filesystem_in_capsule 00:07:22.292 ************************************ 00:07:22.292 20:07:59 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_part 4096 00:07:22.292 20:07:59 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:22.292 20:07:59 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:22.292 20:07:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:22.292 20:07:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:22.292 20:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:22.292 20:07:59 -- nvmf/common.sh@469 -- # nvmfpid=1627117 00:07:22.292 20:07:59 -- nvmf/common.sh@470 -- # waitforlisten 1627117 00:07:22.292 20:07:59 -- common/autotest_common.sh@817 -- # '[' -z 1627117 ']' 00:07:22.292 20:07:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.292 20:07:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:22.292 20:07:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.292 20:07:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:22.292 20:07:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.292 20:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:22.292 [2024-02-14 20:07:59.566476] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
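
For readers tracing the startup above: nvmfappstart launches nvmf_tgt inside the test namespace, then waitforlisten blocks until the target's RPC socket answers. A minimal sketch of that wait loop, assuming SPDK's scripts/rpc.py; the probe RPC and polling interval are illustrative, and only the socket path, pid check, and 100-retry cap are taken from this trace:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < 100; i++)); do
          # Give up early if the target process has already died.
          kill -0 "$pid" 2>/dev/null || return 1
          # Done once any RPC succeeds on the UNIX socket (probe method is an assumption).
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }
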
00:07:22.292 [2024-02-14 20:07:59.566521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.292 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.292 [2024-02-14 20:07:59.627404] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.292 [2024-02-14 20:07:59.703560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.292 [2024-02-14 20:07:59.703672] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.292 [2024-02-14 20:07:59.703680] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.292 [2024-02-14 20:07:59.703690] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.292 [2024-02-14 20:07:59.703729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.292 [2024-02-14 20:07:59.703747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.292 [2024-02-14 20:07:59.703843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.292 [2024-02-14 20:07:59.703844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.232 20:08:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:23.232 20:08:00 -- common/autotest_common.sh@850 -- # return 0 00:07:23.232 20:08:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:23.232 20:08:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:23.232 20:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 20:08:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.232 20:08:00 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:23.232 20:08:00 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:23.232 20:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.232 20:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 [2024-02-14 20:08:00.426989] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.232 20:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.232 20:08:00 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:23.232 20:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.232 20:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 Malloc1 00:07:23.232 20:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.232 20:08:00 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.232 20:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.232 20:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 20:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.232 20:08:00 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.232 20:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.232 20:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 20:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.232 20:08:00 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
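
The rpc_cmd calls traced here reduce to five RPCs that stand up the in-capsule target, plus one initiator-side connect. A condensed sketch of the same sequence, assuming a running nvmf_tgt and SPDK's scripts/rpc.py; the NQN, serial, sizes, and addresses are copied from this run:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096      # 4096-byte in-capsule data
  rpc.py bdev_malloc_create 512 512 -b Malloc1                # 512 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --hostid=801347e8-3fd0-e911-906e-0017a4403562 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
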
00:07:23.232 20:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.232 20:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 [2024-02-14 20:08:00.571120] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.232 20:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.232 20:08:00 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:23.232 20:08:00 -- common/autotest_common.sh@1355 -- # local bdev_name=Malloc1 00:07:23.232 20:08:00 -- common/autotest_common.sh@1356 -- # local bdev_info 00:07:23.232 20:08:00 -- common/autotest_common.sh@1357 -- # local bs 00:07:23.232 20:08:00 -- common/autotest_common.sh@1358 -- # local nb 00:07:23.232 20:08:00 -- common/autotest_common.sh@1359 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.232 20:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.232 20:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 20:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.232 20:08:00 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:07:23.232 { 00:07:23.232 "name": "Malloc1", 00:07:23.232 "aliases": [ 00:07:23.232 "6a386c59-f8f7-4b0b-bca1-3ba7f1f108eb" 00:07:23.232 ], 00:07:23.233 "product_name": "Malloc disk", 00:07:23.233 "block_size": 512, 00:07:23.233 "num_blocks": 1048576, 00:07:23.233 "uuid": "6a386c59-f8f7-4b0b-bca1-3ba7f1f108eb", 00:07:23.233 "assigned_rate_limits": { 00:07:23.233 "rw_ios_per_sec": 0, 00:07:23.233 "rw_mbytes_per_sec": 0, 00:07:23.233 "r_mbytes_per_sec": 0, 00:07:23.233 "w_mbytes_per_sec": 0 00:07:23.233 }, 00:07:23.233 "claimed": true, 00:07:23.233 "claim_type": "exclusive_write", 00:07:23.233 "zoned": false, 00:07:23.233 "supported_io_types": { 00:07:23.233 "read": true, 00:07:23.233 "write": true, 00:07:23.233 "unmap": true, 00:07:23.233 "write_zeroes": true, 00:07:23.233 "flush": true, 00:07:23.233 "reset": true, 00:07:23.233 "compare": false, 00:07:23.233 "compare_and_write": false, 00:07:23.233 "abort": true, 00:07:23.233 "nvme_admin": false, 00:07:23.233 "nvme_io": false 00:07:23.233 }, 00:07:23.233 "memory_domains": [ 00:07:23.233 { 00:07:23.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.233 "dma_device_type": 2 00:07:23.233 } 00:07:23.233 ], 00:07:23.233 "driver_specific": {} 00:07:23.233 } 00:07:23.233 ]' 00:07:23.233 20:08:00 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:07:23.233 20:08:00 -- common/autotest_common.sh@1360 -- # bs=512 00:07:23.233 20:08:00 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:07:23.493 20:08:00 -- common/autotest_common.sh@1361 -- # nb=1048576 00:07:23.493 20:08:00 -- common/autotest_common.sh@1364 -- # bdev_size=512 00:07:23.493 20:08:00 -- common/autotest_common.sh@1365 -- # echo 512 00:07:23.493 20:08:00 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.493 20:08:00 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.433 20:08:01 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.433 20:08:01 -- common/autotest_common.sh@1175 -- # local i=0 00:07:24.433 20:08:01 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.433 20:08:01 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:07:24.433 20:08:01 -- common/autotest_common.sh@1182 -- # sleep 2 00:07:27.002 20:08:03 
-- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:07:27.002 20:08:03 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:07:27.002 20:08:03 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.002 20:08:03 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:07:27.002 20:08:03 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.002 20:08:03 -- common/autotest_common.sh@1185 -- # return 0 00:07:27.002 20:08:03 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:27.002 20:08:03 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:27.002 20:08:03 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:27.002 20:08:03 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:27.002 20:08:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:27.002 20:08:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:27.002 20:08:03 -- setup/common.sh@80 -- # echo 536870912 00:07:27.002 20:08:03 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:27.002 20:08:03 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:27.003 20:08:03 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:27.003 20:08:03 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:27.003 20:08:04 -- target/filesystem.sh@69 -- # partprobe 00:07:27.941 20:08:05 -- target/filesystem.sh@70 -- # sleep 1 00:07:28.880 20:08:06 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:28.880 20:08:06 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:28.880 20:08:06 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:28.880 20:08:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:28.880 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:28.880 ************************************ 00:07:28.880 START TEST filesystem_in_capsule_ext4 00:07:28.880 ************************************ 00:07:28.880 20:08:06 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:28.880 20:08:06 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:28.880 20:08:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.880 20:08:06 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:28.880 20:08:06 -- common/autotest_common.sh@900 -- # local fstype=ext4 00:07:28.880 20:08:06 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:28.880 20:08:06 -- common/autotest_common.sh@902 -- # local i=0 00:07:28.880 20:08:06 -- common/autotest_common.sh@903 -- # local force 00:07:28.880 20:08:06 -- common/autotest_common.sh@905 -- # '[' ext4 = ext4 ']' 00:07:28.880 20:08:06 -- common/autotest_common.sh@906 -- # force=-F 00:07:28.880 20:08:06 -- common/autotest_common.sh@911 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:28.880 mke2fs 1.46.5 (30-Dec-2021) 00:07:28.880 Discarding device blocks: 0/522240 done 00:07:28.880 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:28.880 Filesystem UUID: 5fbbbccf-51e0-4b77-9c2e-ee0986fa1348 00:07:28.880 Superblock backups stored on blocks: 00:07:28.880 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:28.880 00:07:28.880 Allocating group tables: 0/64 done 00:07:28.880 Writing inode tables: 0/64 done 00:07:29.139 Creating journal (8192 blocks): done 00:07:29.970 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:07:29.970 00:07:29.970 
20:08:07 -- common/autotest_common.sh@919 -- # return 0 00:07:29.970 20:08:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.231 20:08:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.231 20:08:07 -- target/filesystem.sh@25 -- # sync 00:07:30.231 20:08:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.231 20:08:07 -- target/filesystem.sh@27 -- # sync 00:07:30.231 20:08:07 -- target/filesystem.sh@29 -- # i=0 00:07:30.231 20:08:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.231 20:08:07 -- target/filesystem.sh@37 -- # kill -0 1627117 00:07:30.231 20:08:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.231 20:08:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.231 20:08:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.231 20:08:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.231 00:07:30.231 real 0m1.504s 00:07:30.231 user 0m0.027s 00:07:30.231 sys 0m0.063s 00:07:30.231 20:08:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.231 20:08:07 -- common/autotest_common.sh@10 -- # set +x 00:07:30.231 ************************************ 00:07:30.231 END TEST filesystem_in_capsule_ext4 00:07:30.231 ************************************ 00:07:30.231 20:08:07 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:30.231 20:08:07 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:30.231 20:08:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:30.231 20:08:07 -- common/autotest_common.sh@10 -- # set +x 00:07:30.231 ************************************ 00:07:30.231 START TEST filesystem_in_capsule_btrfs 00:07:30.231 ************************************ 00:07:30.231 20:08:07 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:30.231 20:08:07 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:30.231 20:08:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.231 20:08:07 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:30.231 20:08:07 -- common/autotest_common.sh@900 -- # local fstype=btrfs 00:07:30.231 20:08:07 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:30.231 20:08:07 -- common/autotest_common.sh@902 -- # local i=0 00:07:30.231 20:08:07 -- common/autotest_common.sh@903 -- # local force 00:07:30.231 20:08:07 -- common/autotest_common.sh@905 -- # '[' btrfs = ext4 ']' 00:07:30.231 20:08:07 -- common/autotest_common.sh@908 -- # force=-f 00:07:30.231 20:08:07 -- common/autotest_common.sh@911 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:30.801 btrfs-progs v6.6.2 00:07:30.801 See https://btrfs.readthedocs.io for more information. 00:07:30.801 00:07:30.801 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:30.801 NOTE: several default settings have changed in version 5.15, please make sure 00:07:30.801 this does not affect your deployments: 00:07:30.801 - DUP for metadata (-m dup) 00:07:30.801 - enabled no-holes (-O no-holes) 00:07:30.801 - enabled free-space-tree (-R free-space-tree) 00:07:30.801 00:07:30.801 Label: (null) 00:07:30.801 UUID: 8f03bf15-e66f-4b20-9ec7-c9966594acf1 00:07:30.801 Node size: 16384 00:07:30.801 Sector size: 4096 00:07:30.801 Filesystem size: 510.00MiB 00:07:30.801 Block group profiles: 00:07:30.801 Data: single 8.00MiB 00:07:30.801 Metadata: DUP 32.00MiB 00:07:30.801 System: DUP 8.00MiB 00:07:30.801 SSD detected: yes 00:07:30.801 Zoned device: no 00:07:30.801 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:30.801 Runtime features: free-space-tree 00:07:30.801 Checksum: crc32c 00:07:30.801 Number of devices: 1 00:07:30.801 Devices: 00:07:30.801 ID SIZE PATH 00:07:30.801 1 510.00MiB /dev/nvme0n1p1 00:07:30.801 00:07:30.801 20:08:07 -- common/autotest_common.sh@919 -- # return 0 00:07:30.801 20:08:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.742 20:08:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.742 20:08:08 -- target/filesystem.sh@25 -- # sync 00:07:31.742 20:08:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.742 20:08:08 -- target/filesystem.sh@27 -- # sync 00:07:31.742 20:08:08 -- target/filesystem.sh@29 -- # i=0 00:07:31.742 20:08:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.742 20:08:08 -- target/filesystem.sh@37 -- # kill -0 1627117 00:07:31.742 20:08:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.742 20:08:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.742 20:08:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.742 20:08:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.742 00:07:31.742 real 0m1.289s 00:07:31.742 user 0m0.032s 00:07:31.742 sys 0m0.121s 00:07:31.742 20:08:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.742 20:08:08 -- common/autotest_common.sh@10 -- # set +x 00:07:31.742 ************************************ 00:07:31.742 END TEST filesystem_in_capsule_btrfs 00:07:31.742 ************************************ 00:07:31.742 20:08:08 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:31.742 20:08:08 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:31.742 20:08:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:31.742 20:08:08 -- common/autotest_common.sh@10 -- # set +x 00:07:31.742 ************************************ 00:07:31.742 START TEST filesystem_in_capsule_xfs 00:07:31.742 ************************************ 00:07:31.742 20:08:08 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create xfs nvme0n1 00:07:31.742 20:08:08 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:31.742 20:08:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.742 20:08:08 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:31.742 20:08:08 -- common/autotest_common.sh@900 -- # local fstype=xfs 00:07:31.742 20:08:08 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:31.742 20:08:08 -- common/autotest_common.sh@902 -- # local i=0 00:07:31.742 20:08:08 -- common/autotest_common.sh@903 -- # local force 00:07:31.742 20:08:08 -- common/autotest_common.sh@905 -- # '[' xfs = ext4 ']' 00:07:31.742 20:08:08 -- common/autotest_common.sh@908 -- # force=-f 
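
The make_filesystem trace repeated above (local fstype/dev_name/i/force, the ext4 test, then mkfs) boils down to a small helper. A sketch consistent with the variables declared in the trace; the retry loop implied by the counter i is an assumption:

  make_filesystem() {
      local fstype=$1 dev_name=$2 i=0 force
      # mkfs.ext4 forces with -F; mkfs.btrfs and mkfs.xfs use -f.
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      until mkfs."$fstype" $force "$dev_name"; do
          ((++i < 3)) || return 1   # retry cap is illustrative
          sleep 1
      done
      return 0
  }
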
00:07:31.742 20:08:08 -- common/autotest_common.sh@911 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:31.742 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:31.742 = sectsz=512 attr=2, projid32bit=1 00:07:31.742 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:31.742 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:31.742 data = bsize=4096 blocks=130560, imaxpct=25 00:07:31.742 = sunit=0 swidth=0 blks 00:07:31.743 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:31.743 log =internal log bsize=4096 blocks=16384, version=2 00:07:31.743 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:31.743 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:32.681 Discarding blocks...Done. 00:07:32.681 20:08:09 -- common/autotest_common.sh@919 -- # return 0 00:07:32.681 20:08:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.226 20:08:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.226 20:08:12 -- target/filesystem.sh@25 -- # sync 00:07:35.226 20:08:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.226 20:08:12 -- target/filesystem.sh@27 -- # sync 00:07:35.226 20:08:12 -- target/filesystem.sh@29 -- # i=0 00:07:35.226 20:08:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.226 20:08:12 -- target/filesystem.sh@37 -- # kill -0 1627117 00:07:35.226 20:08:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.226 20:08:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.226 20:08:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.226 20:08:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.226 00:07:35.226 real 0m3.404s 00:07:35.226 user 0m0.023s 00:07:35.226 sys 0m0.073s 00:07:35.226 20:08:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.226 20:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:35.226 ************************************ 00:07:35.226 END TEST filesystem_in_capsule_xfs 00:07:35.226 ************************************ 00:07:35.226 20:08:12 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:35.226 20:08:12 -- target/filesystem.sh@93 -- # sync 00:07:35.226 20:08:12 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:35.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.226 20:08:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:35.226 20:08:12 -- common/autotest_common.sh@1196 -- # local i=0 00:07:35.226 20:08:12 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:07:35.226 20:08:12 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.226 20:08:12 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:35.226 20:08:12 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.226 20:08:12 -- common/autotest_common.sh@1208 -- # return 0 00:07:35.226 20:08:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.226 20:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.226 20:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:35.226 20:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.226 20:08:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:35.226 20:08:12 -- target/filesystem.sh@101 -- # killprocess 1627117 00:07:35.226 20:08:12 -- common/autotest_common.sh@924 -- # '[' -z 1627117 ']' 00:07:35.226 20:08:12 -- common/autotest_common.sh@928 -- # kill -0 1627117 
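
killprocess, entered at the end of this trace, probes the pid before signaling it. A minimal sketch matching the traced steps (kill -0 liveness check, a comm-name check so a sudo wrapper is not the thing signaled, then kill and wait); the sudo branch body is illustrative, since this run takes the reactor_0 path:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                          # still alive?
      if [ "$(uname)" = Linux ]; then
          local process_name=$(ps --no-headers -o comm= "$pid")
          if [ "$process_name" = sudo ]; then
              pid=$(pgrep -P "$pid")                      # illustrative: signal the child
          fi
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null
  }
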
00:07:35.226 20:08:12 -- common/autotest_common.sh@929 -- # uname 00:07:35.226 20:08:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:35.226 20:08:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1627117 00:07:35.226 20:08:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:35.226 20:08:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:35.226 20:08:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1627117' 00:07:35.226 killing process with pid 1627117 00:07:35.226 20:08:12 -- common/autotest_common.sh@943 -- # kill 1627117 00:07:35.226 20:08:12 -- common/autotest_common.sh@948 -- # wait 1627117 00:07:35.797 20:08:12 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:35.797 00:07:35.797 real 0m13.464s 00:07:35.797 user 0m52.894s 00:07:35.797 sys 0m1.179s 00:07:35.797 20:08:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.797 20:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:35.797 ************************************ 00:07:35.797 END TEST nvmf_filesystem_in_capsule 00:07:35.797 ************************************ 00:07:35.797 20:08:13 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:35.797 20:08:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:35.797 20:08:13 -- nvmf/common.sh@116 -- # sync 00:07:35.797 20:08:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:35.797 20:08:13 -- nvmf/common.sh@119 -- # set +e 00:07:35.797 20:08:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:35.797 20:08:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:35.797 rmmod nvme_tcp 00:07:35.797 rmmod nvme_fabrics 00:07:35.797 rmmod nvme_keyring 00:07:35.797 20:08:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:35.797 20:08:13 -- nvmf/common.sh@123 -- # set -e 00:07:35.797 20:08:13 -- nvmf/common.sh@124 -- # return 0 00:07:35.797 20:08:13 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:35.797 20:08:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:35.797 20:08:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:35.797 20:08:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:35.797 20:08:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:35.797 20:08:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:35.797 20:08:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.797 20:08:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.797 20:08:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.340 20:08:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:38.340 00:07:38.340 real 0m33.943s 00:07:38.340 user 1m41.149s 00:07:38.340 sys 0m7.102s 00:07:38.340 20:08:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.340 20:08:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.340 ************************************ 00:07:38.340 END TEST nvmf_filesystem 00:07:38.340 ************************************ 00:07:38.340 20:08:15 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:38.340 20:08:15 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:38.340 20:08:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:38.340 20:08:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.340 ************************************ 00:07:38.340 START TEST nvmf_discovery 00:07:38.340 ************************************ 00:07:38.340 
20:08:15 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:38.340 * Looking for test storage... 00:07:38.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.340 20:08:15 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.340 20:08:15 -- nvmf/common.sh@7 -- # uname -s 00:07:38.340 20:08:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.340 20:08:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.340 20:08:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.340 20:08:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.340 20:08:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.340 20:08:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.340 20:08:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.340 20:08:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.340 20:08:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.340 20:08:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.340 20:08:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:38.340 20:08:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:38.340 20:08:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.340 20:08:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.340 20:08:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.340 20:08:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.340 20:08:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.340 20:08:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.340 20:08:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.340 20:08:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.340 20:08:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.340 20:08:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.340 20:08:15 -- paths/export.sh@5 -- # export PATH 00:07:38.340 20:08:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.340 20:08:15 -- nvmf/common.sh@46 -- # : 0 00:07:38.340 20:08:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:38.340 20:08:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:38.340 20:08:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:38.340 20:08:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.340 20:08:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.340 20:08:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:38.340 20:08:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:38.340 20:08:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:38.340 20:08:15 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:38.340 20:08:15 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:38.340 20:08:15 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:38.340 20:08:15 -- target/discovery.sh@15 -- # hash nvme 00:07:38.340 20:08:15 -- target/discovery.sh@20 -- # nvmftestinit 00:07:38.340 20:08:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:38.340 20:08:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.340 20:08:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:38.340 20:08:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:38.340 20:08:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:38.340 20:08:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.340 20:08:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.340 20:08:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.340 20:08:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:38.340 20:08:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:38.340 20:08:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:38.340 20:08:15 -- common/autotest_common.sh@10 -- # set +x 00:07:43.627 20:08:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:43.627 20:08:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:43.627 20:08:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:43.627 20:08:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:43.627 20:08:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:43.627 20:08:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:43.627 20:08:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:43.627 20:08:20 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:43.627 20:08:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:43.627 20:08:20 -- nvmf/common.sh@295 -- # e810=() 00:07:43.627 20:08:20 -- nvmf/common.sh@295 -- # local -ga e810 00:07:43.627 20:08:20 -- nvmf/common.sh@296 -- # x722=() 00:07:43.627 20:08:20 -- nvmf/common.sh@296 -- # local -ga x722 00:07:43.627 20:08:20 -- nvmf/common.sh@297 -- # mlx=() 00:07:43.627 20:08:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:43.627 20:08:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.627 20:08:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:43.627 20:08:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:43.627 20:08:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:43.627 20:08:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:43.627 20:08:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:43.627 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:43.627 20:08:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:43.627 20:08:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:43.627 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:43.627 20:08:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:43.627 20:08:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:43.627 20:08:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.627 20:08:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:43.627 20:08:20 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.627 20:08:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:43.627 Found net devices under 0000:af:00.0: cvl_0_0 00:07:43.627 20:08:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.627 20:08:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:43.627 20:08:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.627 20:08:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:43.627 20:08:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.627 20:08:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:43.627 Found net devices under 0000:af:00.1: cvl_0_1 00:07:43.627 20:08:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.627 20:08:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:43.627 20:08:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:43.627 20:08:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:43.627 20:08:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:43.627 20:08:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.627 20:08:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.627 20:08:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.627 20:08:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:43.627 20:08:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.627 20:08:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.627 20:08:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:43.627 20:08:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.627 20:08:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.627 20:08:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:43.627 20:08:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:43.627 20:08:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.627 20:08:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.627 20:08:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.627 20:08:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.627 20:08:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:43.627 20:08:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.887 20:08:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.887 20:08:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.887 20:08:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:43.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:07:43.888 00:07:43.888 --- 10.0.0.2 ping statistics --- 00:07:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.888 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:43.888 20:08:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:07:43.888 00:07:43.888 --- 10.0.0.1 ping statistics --- 00:07:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.888 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:07:43.888 20:08:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.888 20:08:21 -- nvmf/common.sh@410 -- # return 0 00:07:43.888 20:08:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:43.888 20:08:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.888 20:08:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:43.888 20:08:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:43.888 20:08:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.888 20:08:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:43.888 20:08:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:43.888 20:08:21 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:43.888 20:08:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:43.888 20:08:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:43.888 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:07:43.888 20:08:21 -- nvmf/common.sh@469 -- # nvmfpid=1633429 00:07:43.888 20:08:21 -- nvmf/common.sh@470 -- # waitforlisten 1633429 00:07:43.888 20:08:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.888 20:08:21 -- common/autotest_common.sh@817 -- # '[' -z 1633429 ']' 00:07:43.888 20:08:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.888 20:08:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:43.888 20:08:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.888 20:08:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:43.888 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:07:43.888 [2024-02-14 20:08:21.247325] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:43.888 [2024-02-14 20:08:21.247369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.888 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.148 [2024-02-14 20:08:21.310482] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.148 [2024-02-14 20:08:21.386737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:44.148 [2024-02-14 20:08:21.386855] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.148 [2024-02-14 20:08:21.386863] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.148 [2024-02-14 20:08:21.386869] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
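
Behind the nvmf_tcp_init trace above, the test moves one e810 port into a private network namespace so that the target (10.0.0.2) and initiator (10.0.0.1) talk over real hardware. The essential commands, as traced; the cvl_0_0/cvl_0_1 interface names are specific to this machine:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # sanity check in each direction
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
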
00:07:44.148 [2024-02-14 20:08:21.386914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.148 [2024-02-14 20:08:21.387011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.148 [2024-02-14 20:08:21.387086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.148 [2024-02-14 20:08:21.387087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.719 20:08:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:44.719 20:08:22 -- common/autotest_common.sh@850 -- # return 0 00:07:44.719 20:08:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:44.719 20:08:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:44.719 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.719 20:08:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.719 20:08:22 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.719 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.719 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.719 [2024-02-14 20:08:22.094924] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.719 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.719 20:08:22 -- target/discovery.sh@26 -- # seq 1 4 00:07:44.719 20:08:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:44.719 20:08:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:44.719 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.719 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.719 Null1 00:07:44.719 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.719 20:08:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:44.719 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.719 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.719 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.719 20:08:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:44.719 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.719 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 [2024-02-14 20:08:22.140335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:44.980 20:08:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 Null2 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:44.980 20:08:22 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:44.980 20:08:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 Null3 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:44.980 20:08:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 Null4 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:44.980 
20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:44.980 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.980 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.980 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.980 20:08:22 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:45.241 00:07:45.241 Discovery Log Number of Records 6, Generation counter 6 00:07:45.241 =====Discovery Log Entry 0====== 00:07:45.241 trtype: tcp 00:07:45.241 adrfam: ipv4 00:07:45.241 subtype: current discovery subsystem 00:07:45.241 treq: not required 00:07:45.241 portid: 0 00:07:45.241 trsvcid: 4420 00:07:45.241 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:45.241 traddr: 10.0.0.2 00:07:45.241 eflags: explicit discovery connections, duplicate discovery information 00:07:45.241 sectype: none 00:07:45.241 =====Discovery Log Entry 1====== 00:07:45.241 trtype: tcp 00:07:45.241 adrfam: ipv4 00:07:45.241 subtype: nvme subsystem 00:07:45.241 treq: not required 00:07:45.241 portid: 0 00:07:45.241 trsvcid: 4420 00:07:45.241 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:45.241 traddr: 10.0.0.2 00:07:45.241 eflags: none 00:07:45.241 sectype: none 00:07:45.241 =====Discovery Log Entry 2====== 00:07:45.241 trtype: tcp 00:07:45.241 adrfam: ipv4 00:07:45.241 subtype: nvme subsystem 00:07:45.241 treq: not required 00:07:45.241 portid: 0 00:07:45.241 trsvcid: 4420 00:07:45.241 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:45.241 traddr: 10.0.0.2 00:07:45.241 eflags: none 00:07:45.241 sectype: none 00:07:45.241 =====Discovery Log Entry 3====== 00:07:45.241 trtype: tcp 00:07:45.241 adrfam: ipv4 00:07:45.241 subtype: nvme subsystem 00:07:45.241 treq: not required 00:07:45.241 portid: 0 00:07:45.241 trsvcid: 4420 00:07:45.241 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:45.241 traddr: 10.0.0.2 00:07:45.241 eflags: none 00:07:45.241 sectype: none 00:07:45.241 =====Discovery Log Entry 4====== 00:07:45.241 trtype: tcp 00:07:45.241 adrfam: ipv4 00:07:45.241 subtype: nvme subsystem 00:07:45.241 treq: not required 00:07:45.241 portid: 0 00:07:45.241 trsvcid: 4420 00:07:45.242 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:45.242 traddr: 10.0.0.2 00:07:45.242 eflags: none 00:07:45.242 sectype: none 00:07:45.242 =====Discovery Log Entry 5====== 00:07:45.242 trtype: tcp 00:07:45.242 adrfam: ipv4 00:07:45.242 subtype: discovery subsystem referral 00:07:45.242 treq: not required 00:07:45.242 portid: 0 00:07:45.242 trsvcid: 4430 00:07:45.242 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:45.242 traddr: 10.0.0.2 00:07:45.242 eflags: none 00:07:45.242 sectype: none 00:07:45.242 20:08:22 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:45.242 Perform nvmf subsystem discovery via RPC 00:07:45.242 20:08:22 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:45.242 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.242 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.242 [2024-02-14 20:08:22.469337] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:45.242 [ 00:07:45.242 { 00:07:45.242 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:45.242 "subtype": "Discovery", 00:07:45.242 "listen_addresses": [ 00:07:45.242 { 00:07:45.242 "transport": "TCP", 00:07:45.242 "trtype": "TCP", 00:07:45.242 "adrfam": "IPv4", 00:07:45.242 "traddr": "10.0.0.2", 00:07:45.242 "trsvcid": "4420" 00:07:45.242 } 00:07:45.242 ], 00:07:45.242 "allow_any_host": true, 00:07:45.242 "hosts": [] 00:07:45.242 }, 00:07:45.242 { 00:07:45.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.242 "subtype": "NVMe", 00:07:45.242 "listen_addresses": [ 00:07:45.242 { 00:07:45.242 "transport": "TCP", 00:07:45.242 "trtype": "TCP", 00:07:45.242 "adrfam": "IPv4", 00:07:45.242 "traddr": "10.0.0.2", 00:07:45.242 "trsvcid": "4420" 00:07:45.242 } 00:07:45.242 ], 00:07:45.242 "allow_any_host": true, 00:07:45.242 "hosts": [], 00:07:45.242 "serial_number": "SPDK00000000000001", 00:07:45.242 "model_number": "SPDK bdev Controller", 00:07:45.242 "max_namespaces": 32, 00:07:45.242 "min_cntlid": 1, 00:07:45.242 "max_cntlid": 65519, 00:07:45.242 "namespaces": [ 00:07:45.242 { 00:07:45.242 "nsid": 1, 00:07:45.242 "bdev_name": "Null1", 00:07:45.242 "name": "Null1", 00:07:45.242 "nguid": "513E6C051FAB4881B47D7EF0A6260B8A", 00:07:45.242 "uuid": "513e6c05-1fab-4881-b47d-7ef0a6260b8a" 00:07:45.242 } 00:07:45.242 ] 00:07:45.242 }, 00:07:45.242 { 00:07:45.242 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:45.242 "subtype": "NVMe", 00:07:45.242 "listen_addresses": [ 00:07:45.242 { 00:07:45.242 "transport": "TCP", 00:07:45.242 "trtype": "TCP", 00:07:45.242 "adrfam": "IPv4", 00:07:45.242 "traddr": "10.0.0.2", 00:07:45.242 "trsvcid": "4420" 00:07:45.242 } 00:07:45.242 ], 00:07:45.242 "allow_any_host": true, 00:07:45.242 "hosts": [], 00:07:45.242 "serial_number": "SPDK00000000000002", 00:07:45.242 "model_number": "SPDK bdev Controller", 00:07:45.242 "max_namespaces": 32, 00:07:45.242 "min_cntlid": 1, 00:07:45.242 "max_cntlid": 65519, 00:07:45.242 "namespaces": [ 00:07:45.242 { 00:07:45.242 "nsid": 1, 00:07:45.242 "bdev_name": "Null2", 00:07:45.242 "name": "Null2", 00:07:45.242 "nguid": "11292AF72B3B4E948B598410768AC7E5", 00:07:45.242 "uuid": "11292af7-2b3b-4e94-8b59-8410768ac7e5" 00:07:45.242 } 00:07:45.242 ] 00:07:45.242 }, 00:07:45.242 { 00:07:45.242 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:45.242 "subtype": "NVMe", 00:07:45.242 "listen_addresses": [ 00:07:45.242 { 00:07:45.242 "transport": "TCP", 00:07:45.242 "trtype": "TCP", 00:07:45.242 "adrfam": "IPv4", 00:07:45.242 "traddr": "10.0.0.2", 00:07:45.242 "trsvcid": "4420" 00:07:45.242 } 00:07:45.242 ], 00:07:45.242 "allow_any_host": true, 00:07:45.242 "hosts": [], 00:07:45.242 "serial_number": "SPDK00000000000003", 00:07:45.242 "model_number": "SPDK bdev Controller", 00:07:45.242 "max_namespaces": 32, 00:07:45.242 "min_cntlid": 1, 00:07:45.242 "max_cntlid": 65519, 00:07:45.242 "namespaces": [ 00:07:45.242 { 00:07:45.242 "nsid": 1, 00:07:45.242 "bdev_name": "Null3", 00:07:45.242 "name": "Null3", 00:07:45.242 "nguid": "B0FEE735BD32469EAEE998BE92249274", 00:07:45.242 "uuid": "b0fee735-bd32-469e-aee9-98be92249274" 00:07:45.242 } 00:07:45.242 ] 
00:07:45.242 }, 00:07:45.242 { 00:07:45.242 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:45.242 "subtype": "NVMe", 00:07:45.242 "listen_addresses": [ 00:07:45.242 { 00:07:45.242 "transport": "TCP", 00:07:45.242 "trtype": "TCP", 00:07:45.242 "adrfam": "IPv4", 00:07:45.242 "traddr": "10.0.0.2", 00:07:45.242 "trsvcid": "4420" 00:07:45.242 } 00:07:45.242 ], 00:07:45.242 "allow_any_host": true, 00:07:45.242 "hosts": [], 00:07:45.242 "serial_number": "SPDK00000000000004", 00:07:45.242 "model_number": "SPDK bdev Controller", 00:07:45.242 "max_namespaces": 32, 00:07:45.242 "min_cntlid": 1, 00:07:45.242 "max_cntlid": 65519, 00:07:45.242 "namespaces": [ 00:07:45.242 { 00:07:45.242 "nsid": 1, 00:07:45.242 "bdev_name": "Null4", 00:07:45.242 "name": "Null4", 00:07:45.242 "nguid": "EF41D5B5B7154FB7BD873B2461F4C729", 00:07:45.242 "uuid": "ef41d5b5-b715-4fb7-bd87-3b2461f4c729" 00:07:45.242 } 00:07:45.242 ] 00:07:45.242 } 00:07:45.242 ] 00:07:45.242 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.242 20:08:22 -- target/discovery.sh@42 -- # seq 1 4 00:07:45.242 20:08:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.242 20:08:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.242 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.242 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.242 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.242 20:08:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:45.242 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.242 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.242 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.242 20:08:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.242 20:08:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:45.242 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.242 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.242 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.242 20:08:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:45.242 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.242 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.242 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.242 20:08:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.242 20:08:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:45.242 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.242 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.242 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.242 20:08:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:45.242 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.242 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.242 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.242 20:08:22 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.242 20:08:22 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:45.242 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.242 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.242 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
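[Editorial sketch] The discovery test drives the target entirely over its JSON-RPC interface: four Null bdevs become four NVMe subsystems with TCP listeners, the whole set is enumerated with nvmf_get_subsystems, and everything is torn down again (the final bdev_null_delete Null4 follows just below). A condensed sketch of that cycle, assuming a running nvmf_tgt with a TCP transport already created and SPDK's scripts/rpc.py at hand (the rpc shorthand is illustrative; all method names and flags are taken from the trace):

  rpc=scripts/rpc.py
  for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_get_subsystems            # emits the JSON dump shown above
  for i in 1 2 3 4; do
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $rpc bdev_null_delete Null$i
  done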
00:07:45.243 20:08:22 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:45.243 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.243 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.243 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.243 20:08:22 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:45.243 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.243 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.243 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.243 20:08:22 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:45.243 20:08:22 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:45.243 20:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.243 20:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.243 20:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.243 20:08:22 -- target/discovery.sh@49 -- # check_bdevs= 00:07:45.243 20:08:22 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:45.243 20:08:22 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:45.243 20:08:22 -- target/discovery.sh@57 -- # nvmftestfini 00:07:45.243 20:08:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:45.243 20:08:22 -- nvmf/common.sh@116 -- # sync 00:07:45.243 20:08:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:45.243 20:08:22 -- nvmf/common.sh@119 -- # set +e 00:07:45.243 20:08:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:45.243 20:08:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:45.243 rmmod nvme_tcp 00:07:45.243 rmmod nvme_fabrics 00:07:45.243 rmmod nvme_keyring 00:07:45.503 20:08:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:45.503 20:08:22 -- nvmf/common.sh@123 -- # set -e 00:07:45.503 20:08:22 -- nvmf/common.sh@124 -- # return 0 00:07:45.503 20:08:22 -- nvmf/common.sh@477 -- # '[' -n 1633429 ']' 00:07:45.503 20:08:22 -- nvmf/common.sh@478 -- # killprocess 1633429 00:07:45.503 20:08:22 -- common/autotest_common.sh@924 -- # '[' -z 1633429 ']' 00:07:45.503 20:08:22 -- common/autotest_common.sh@928 -- # kill -0 1633429 00:07:45.503 20:08:22 -- common/autotest_common.sh@929 -- # uname 00:07:45.503 20:08:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:45.503 20:08:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1633429 00:07:45.503 20:08:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:45.503 20:08:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:45.503 20:08:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1633429' 00:07:45.503 killing process with pid 1633429 00:07:45.503 20:08:22 -- common/autotest_common.sh@943 -- # kill 1633429 00:07:45.503 [2024-02-14 20:08:22.723988] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:45.503 20:08:22 -- common/autotest_common.sh@948 -- # wait 1633429 00:07:45.764 20:08:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:45.764 20:08:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:45.764 20:08:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:45.764 20:08:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.764 20:08:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:45.764 20:08:22 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.764 20:08:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.764 20:08:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.680 20:08:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:47.680 00:07:47.680 real 0m9.814s 00:07:47.680 user 0m7.818s 00:07:47.680 sys 0m4.791s 00:07:47.680 20:08:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.680 20:08:24 -- common/autotest_common.sh@10 -- # set +x 00:07:47.680 ************************************ 00:07:47.680 END TEST nvmf_discovery 00:07:47.680 ************************************ 00:07:47.680 20:08:25 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:47.680 20:08:25 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:47.680 20:08:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:47.680 20:08:25 -- common/autotest_common.sh@10 -- # set +x 00:07:47.680 ************************************ 00:07:47.680 START TEST nvmf_referrals 00:07:47.680 ************************************ 00:07:47.680 20:08:25 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:47.942 * Looking for test storage... 00:07:47.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.942 20:08:25 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.942 20:08:25 -- nvmf/common.sh@7 -- # uname -s 00:07:47.942 20:08:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.942 20:08:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.942 20:08:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.942 20:08:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.942 20:08:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.942 20:08:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.942 20:08:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.942 20:08:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.942 20:08:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.942 20:08:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.942 20:08:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:47.942 20:08:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:47.942 20:08:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.942 20:08:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.942 20:08:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.942 20:08:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.942 20:08:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.942 20:08:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.942 20:08:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.942 20:08:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.942 20:08:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.942 20:08:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.942 20:08:25 -- paths/export.sh@5 -- # export PATH 00:07:47.942 20:08:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.942 20:08:25 -- nvmf/common.sh@46 -- # : 0 00:07:47.942 20:08:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:47.942 20:08:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:47.942 20:08:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:47.942 20:08:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.942 20:08:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.942 20:08:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:47.942 20:08:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:47.942 20:08:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:47.942 20:08:25 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:47.942 20:08:25 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:47.942 20:08:25 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:47.942 20:08:25 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:47.942 20:08:25 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:47.942 20:08:25 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:47.942 20:08:25 -- target/referrals.sh@37 -- # nvmftestinit 00:07:47.942 20:08:25 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:07:47.942 20:08:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.942 20:08:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:47.942 20:08:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:47.942 20:08:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:47.942 20:08:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.942 20:08:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.942 20:08:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.942 20:08:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:47.942 20:08:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:47.942 20:08:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:47.942 20:08:25 -- common/autotest_common.sh@10 -- # set +x 00:07:54.607 20:08:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:54.607 20:08:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:54.607 20:08:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:54.607 20:08:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:54.607 20:08:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:54.607 20:08:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:54.607 20:08:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:54.607 20:08:31 -- nvmf/common.sh@294 -- # net_devs=() 00:07:54.607 20:08:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:54.607 20:08:31 -- nvmf/common.sh@295 -- # e810=() 00:07:54.607 20:08:31 -- nvmf/common.sh@295 -- # local -ga e810 00:07:54.607 20:08:31 -- nvmf/common.sh@296 -- # x722=() 00:07:54.607 20:08:31 -- nvmf/common.sh@296 -- # local -ga x722 00:07:54.607 20:08:31 -- nvmf/common.sh@297 -- # mlx=() 00:07:54.607 20:08:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:54.607 20:08:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.607 20:08:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:54.607 20:08:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:54.607 20:08:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:54.607 20:08:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:54.607 20:08:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:54.607 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:54.607 20:08:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:54.607 20:08:31 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:54.607 20:08:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:54.607 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:54.607 20:08:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:54.607 20:08:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:54.607 20:08:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:54.607 20:08:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.608 20:08:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:54.608 20:08:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.608 20:08:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:54.608 Found net devices under 0000:af:00.0: cvl_0_0 00:07:54.608 20:08:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.608 20:08:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:54.608 20:08:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.608 20:08:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:54.608 20:08:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.608 20:08:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:54.608 Found net devices under 0000:af:00.1: cvl_0_1 00:07:54.608 20:08:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.608 20:08:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:54.608 20:08:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:54.608 20:08:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:54.608 20:08:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:54.608 20:08:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:54.608 20:08:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.608 20:08:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.608 20:08:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.608 20:08:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:54.608 20:08:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.608 20:08:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.608 20:08:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:54.608 20:08:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.608 20:08:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.608 20:08:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:54.608 20:08:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:54.608 20:08:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.608 20:08:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:07:54.608 20:08:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.608 20:08:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.608 20:08:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:54.608 20:08:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.608 20:08:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.608 20:08:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.608 20:08:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:54.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:07:54.608 00:07:54.608 --- 10.0.0.2 ping statistics --- 00:07:54.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.608 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:54.608 20:08:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:07:54.608 00:07:54.608 --- 10.0.0.1 ping statistics --- 00:07:54.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.608 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:54.608 20:08:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.608 20:08:31 -- nvmf/common.sh@410 -- # return 0 00:07:54.608 20:08:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:54.608 20:08:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.608 20:08:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:54.608 20:08:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:54.608 20:08:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.608 20:08:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:54.608 20:08:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:54.608 20:08:31 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:54.608 20:08:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:54.608 20:08:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:54.608 20:08:31 -- common/autotest_common.sh@10 -- # set +x 00:07:54.608 20:08:31 -- nvmf/common.sh@469 -- # nvmfpid=1637498 00:07:54.608 20:08:31 -- nvmf/common.sh@470 -- # waitforlisten 1637498 00:07:54.608 20:08:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.608 20:08:31 -- common/autotest_common.sh@817 -- # '[' -z 1637498 ']' 00:07:54.608 20:08:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.608 20:08:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:54.608 20:08:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.608 20:08:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:54.608 20:08:31 -- common/autotest_common.sh@10 -- # set +x 00:07:54.608 [2024-02-14 20:08:31.600508] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
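[Editorial sketch] The nvmf_tgt instance whose startup banner begins here runs inside the cvl_0_0_ns_spdk namespace built just above, so the target only ever sees the first E810 port while the initiator keeps the second port on the same /24. A rough sketch of that isolation scheme, collapsing the commands from the trace (interface and namespace names as traced; binary path given relative to the SPDK repo):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target, verified above
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &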
00:07:54.608 [2024-02-14 20:08:31.600548] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.608 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.608 [2024-02-14 20:08:31.663610] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.608 [2024-02-14 20:08:31.738696] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.608 [2024-02-14 20:08:31.738809] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.608 [2024-02-14 20:08:31.738816] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.608 [2024-02-14 20:08:31.738822] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.608 [2024-02-14 20:08:31.738868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.608 [2024-02-14 20:08:31.738982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.608 [2024-02-14 20:08:31.739070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.608 [2024-02-14 20:08:31.739071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.178 20:08:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:55.178 20:08:32 -- common/autotest_common.sh@850 -- # return 0 00:07:55.178 20:08:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:55.178 20:08:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:55.178 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 20:08:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.178 20:08:32 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.178 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.178 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 [2024-02-14 20:08:32.429866] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.178 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.178 20:08:32 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:55.178 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.178 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 [2024-02-14 20:08:32.443209] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:55.178 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.178 20:08:32 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:55.178 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.178 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.178 20:08:32 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:55.178 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.178 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.178 20:08:32 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:07:55.178 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.178 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.178 20:08:32 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.178 20:08:32 -- target/referrals.sh@48 -- # jq length 00:07:55.178 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.178 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.178 20:08:32 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:55.178 20:08:32 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:55.178 20:08:32 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:55.178 20:08:32 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.178 20:08:32 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:55.178 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.178 20:08:32 -- target/referrals.sh@21 -- # sort 00:07:55.178 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.178 20:08:32 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:55.178 20:08:32 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:55.178 20:08:32 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:55.178 20:08:32 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.178 20:08:32 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.178 20:08:32 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.178 20:08:32 -- target/referrals.sh@26 -- # sort 00:07:55.178 20:08:32 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.437 20:08:32 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:55.437 20:08:32 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:55.437 20:08:32 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:55.437 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.437 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.437 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.437 20:08:32 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:55.437 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.437 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.437 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.437 20:08:32 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:55.437 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.437 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.437 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.437 20:08:32 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.437 20:08:32 -- target/referrals.sh@56 -- # jq length 00:07:55.437 20:08:32 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.437 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.437 20:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.437 20:08:32 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:55.696 20:08:32 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:55.696 20:08:32 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.696 20:08:32 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.697 20:08:32 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.697 20:08:32 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.697 20:08:32 -- target/referrals.sh@26 -- # sort 00:07:55.697 20:08:32 -- target/referrals.sh@26 -- # echo 00:07:55.697 20:08:32 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:55.697 20:08:32 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:55.697 20:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.697 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.697 20:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.697 20:08:33 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:55.697 20:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.697 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:07:55.697 20:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.697 20:08:33 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:55.697 20:08:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:55.697 20:08:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.697 20:08:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:55.697 20:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.697 20:08:33 -- target/referrals.sh@21 -- # sort 00:07:55.697 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:07:55.697 20:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.697 20:08:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:55.697 20:08:33 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:55.697 20:08:33 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:55.697 20:08:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.697 20:08:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.697 20:08:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.697 20:08:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.697 20:08:33 -- target/referrals.sh@26 -- # sort 00:07:55.956 20:08:33 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:55.956 20:08:33 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:55.956 20:08:33 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:55.956 20:08:33 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:55.956 20:08:33 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:55.956 20:08:33 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.956 20:08:33 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:55.956 20:08:33 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:56.215 20:08:33 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:56.215 20:08:33 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:56.215 20:08:33 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:56.215 20:08:33 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.215 20:08:33 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:56.215 20:08:33 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:56.215 20:08:33 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:56.215 20:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.215 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.215 20:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.215 20:08:33 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:56.215 20:08:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:56.215 20:08:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.215 20:08:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:56.215 20:08:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.215 20:08:33 -- target/referrals.sh@21 -- # sort 00:07:56.215 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.215 20:08:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.215 20:08:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:56.215 20:08:33 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:56.215 20:08:33 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:56.215 20:08:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.215 20:08:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.215 20:08:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.215 20:08:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.215 20:08:33 -- target/referrals.sh@26 -- # sort 00:07:56.475 20:08:33 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:56.475 20:08:33 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:56.475 20:08:33 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:56.475 20:08:33 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:56.475 20:08:33 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:56.475 20:08:33 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.475 20:08:33 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:56.475 20:08:33 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:56.475 20:08:33 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:56.475 20:08:33 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:56.475 20:08:33 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:56.475 20:08:33 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.475 20:08:33 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:56.736 20:08:34 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:56.736 20:08:34 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:56.736 20:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.736 20:08:34 -- common/autotest_common.sh@10 -- # set +x 00:07:56.736 20:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.736 20:08:34 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.736 20:08:34 -- target/referrals.sh@82 -- # jq length 00:07:56.736 20:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.736 20:08:34 -- common/autotest_common.sh@10 -- # set +x 00:07:56.736 20:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.736 20:08:34 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:56.736 20:08:34 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:56.736 20:08:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.736 20:08:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.736 20:08:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.736 20:08:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.736 20:08:34 -- target/referrals.sh@26 -- # sort 00:07:56.997 20:08:34 -- target/referrals.sh@26 -- # echo 00:07:56.997 20:08:34 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:56.997 20:08:34 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:56.997 20:08:34 -- target/referrals.sh@86 -- # nvmftestfini 00:07:56.997 20:08:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:56.997 20:08:34 -- nvmf/common.sh@116 -- # sync 00:07:56.997 20:08:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:56.997 20:08:34 -- nvmf/common.sh@119 -- # set +e 00:07:56.997 20:08:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:56.997 20:08:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:56.997 rmmod nvme_tcp 00:07:56.997 rmmod nvme_fabrics 00:07:56.997 rmmod nvme_keyring 00:07:56.997 20:08:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:56.997 20:08:34 -- nvmf/common.sh@123 -- # set -e 00:07:56.997 20:08:34 -- nvmf/common.sh@124 -- # return 0 00:07:56.997 20:08:34 -- nvmf/common.sh@477 
-- # '[' -n 1637498 ']' 00:07:56.997 20:08:34 -- nvmf/common.sh@478 -- # killprocess 1637498 00:07:56.997 20:08:34 -- common/autotest_common.sh@924 -- # '[' -z 1637498 ']' 00:07:56.997 20:08:34 -- common/autotest_common.sh@928 -- # kill -0 1637498 00:07:56.997 20:08:34 -- common/autotest_common.sh@929 -- # uname 00:07:56.997 20:08:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:56.997 20:08:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1637498 00:07:56.997 20:08:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:56.997 20:08:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:56.997 20:08:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1637498' 00:07:56.997 killing process with pid 1637498 00:07:56.997 20:08:34 -- common/autotest_common.sh@943 -- # kill 1637498 00:07:56.997 20:08:34 -- common/autotest_common.sh@948 -- # wait 1637498 00:07:57.258 20:08:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:57.258 20:08:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:57.258 20:08:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:57.258 20:08:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.258 20:08:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:57.258 20:08:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.258 20:08:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.258 20:08:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.798 20:08:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:59.798 00:07:59.798 real 0m11.554s 00:07:59.798 user 0m13.657s 00:07:59.798 sys 0m5.444s 00:07:59.798 20:08:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.798 20:08:36 -- common/autotest_common.sh@10 -- # set +x 00:07:59.798 ************************************ 00:07:59.798 END TEST nvmf_referrals 00:07:59.798 ************************************ 00:07:59.798 20:08:36 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:59.798 20:08:36 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:59.798 20:08:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:59.798 20:08:36 -- common/autotest_common.sh@10 -- # set +x 00:07:59.798 ************************************ 00:07:59.798 START TEST nvmf_connect_disconnect 00:07:59.798 ************************************ 00:07:59.798 20:08:36 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:59.798 * Looking for test storage... 
00:07:59.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.798 20:08:36 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.798 20:08:36 -- nvmf/common.sh@7 -- # uname -s 00:07:59.798 20:08:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.798 20:08:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.798 20:08:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.798 20:08:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.798 20:08:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.799 20:08:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.799 20:08:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.799 20:08:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.799 20:08:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.799 20:08:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.799 20:08:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:59.799 20:08:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:59.799 20:08:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.799 20:08:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.799 20:08:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.799 20:08:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.799 20:08:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.799 20:08:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.799 20:08:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.799 20:08:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.799 20:08:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.799 20:08:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.799 20:08:36 -- paths/export.sh@5 -- # export PATH 00:07:59.799 20:08:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.799 20:08:36 -- nvmf/common.sh@46 -- # : 0 00:07:59.799 20:08:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:59.799 20:08:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:59.799 20:08:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:59.799 20:08:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.799 20:08:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.799 20:08:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:59.799 20:08:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:59.799 20:08:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:59.799 20:08:36 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.799 20:08:36 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.799 20:08:36 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:59.799 20:08:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:59.799 20:08:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.799 20:08:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:59.799 20:08:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:59.799 20:08:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:59.799 20:08:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.799 20:08:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.799 20:08:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.799 20:08:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:59.799 20:08:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:59.799 20:08:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:59.799 20:08:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.082 20:08:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:05.082 20:08:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:05.082 20:08:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:05.082 20:08:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:05.082 20:08:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:05.082 20:08:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:05.082 20:08:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:05.082 20:08:42 -- nvmf/common.sh@294 -- # net_devs=() 00:08:05.082 20:08:42 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:05.082 20:08:42 -- nvmf/common.sh@295 -- # e810=() 00:08:05.082 20:08:42 -- nvmf/common.sh@295 -- # local -ga e810 00:08:05.082 20:08:42 -- nvmf/common.sh@296 -- # x722=() 00:08:05.082 20:08:42 -- nvmf/common.sh@296 -- # local -ga x722 00:08:05.082 20:08:42 -- nvmf/common.sh@297 -- # mlx=() 00:08:05.082 20:08:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:05.082 20:08:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.082 20:08:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:05.082 20:08:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:05.082 20:08:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:05.082 20:08:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:05.082 20:08:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:05.082 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:05.082 20:08:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:05.082 20:08:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:05.082 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:05.082 20:08:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:05.082 20:08:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:05.082 20:08:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:05.082 20:08:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.082 20:08:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:05.082 20:08:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.082 20:08:42 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:08:05.083 Found net devices under 0000:af:00.0: cvl_0_0 00:08:05.083 20:08:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.083 20:08:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:05.083 20:08:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.083 20:08:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:05.083 20:08:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.083 20:08:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:05.083 Found net devices under 0000:af:00.1: cvl_0_1 00:08:05.083 20:08:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.083 20:08:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:05.083 20:08:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:05.083 20:08:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:05.083 20:08:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:05.083 20:08:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:05.083 20:08:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.083 20:08:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.083 20:08:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.083 20:08:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:05.083 20:08:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.083 20:08:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.083 20:08:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:05.083 20:08:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.083 20:08:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.083 20:08:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:05.083 20:08:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:05.083 20:08:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.083 20:08:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.083 20:08:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.083 20:08:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.083 20:08:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:05.083 20:08:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.343 20:08:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.343 20:08:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.343 20:08:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:05.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:08:05.343 00:08:05.343 --- 10.0.0.2 ping statistics --- 00:08:05.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.343 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:08:05.343 20:08:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:08:05.343 00:08:05.343 --- 10.0.0.1 ping statistics --- 00:08:05.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.343 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:08:05.343 20:08:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.343 20:08:42 -- nvmf/common.sh@410 -- # return 0 00:08:05.343 20:08:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:05.343 20:08:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.343 20:08:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:05.343 20:08:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:05.343 20:08:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.343 20:08:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:05.343 20:08:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:05.343 20:08:42 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:05.343 20:08:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:05.343 20:08:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:05.343 20:08:42 -- common/autotest_common.sh@10 -- # set +x 00:08:05.343 20:08:42 -- nvmf/common.sh@469 -- # nvmfpid=1641860 00:08:05.343 20:08:42 -- nvmf/common.sh@470 -- # waitforlisten 1641860 00:08:05.343 20:08:42 -- common/autotest_common.sh@817 -- # '[' -z 1641860 ']' 00:08:05.343 20:08:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.343 20:08:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:05.343 20:08:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.343 20:08:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:05.344 20:08:42 -- common/autotest_common.sh@10 -- # set +x 00:08:05.344 20:08:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:05.344 [2024-02-14 20:08:42.683802] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:05.344 [2024-02-14 20:08:42.683845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.344 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.344 [2024-02-14 20:08:42.749837] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.604 [2024-02-14 20:08:42.828153] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:05.604 [2024-02-14 20:08:42.828257] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.604 [2024-02-14 20:08:42.828264] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.604 [2024-02-14 20:08:42.828271] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
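[Editor's note: the bring-up traced above — one port of the E810 pair moved into a private network namespace, 10.0.0.1/10.0.0.2 assigned to the two ends, TCP port 4420 opened, and nvmf_tgt launched inside the namespace — can be reproduced standalone. A minimal sketch, assuming a veth pair in place of the physical cvl_0_0/cvl_0_1 ports and a plain polling loop in place of the autotest waitforlisten helper; the namespace and interface names below are illustrative, not taken from the log:]

  # namespace plus a veth pair standing in for the two NIC ports
  ip netns add nvmf_tgt_ns
  ip link add veth_host type veth peer name veth_tgt
  ip link set veth_tgt netns nvmf_tgt_ns
  ip addr add 10.0.0.1/24 dev veth_host
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_host up
  ip netns exec nvmf_tgt_ns ip link set veth_tgt up
  ip netns exec nvmf_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # same reachability check the log performs in both directions
  # start the target inside the namespace, then poll its RPC socket until it answers
  ip netns exec nvmf_tgt_ns ./build/bin/nvmf_tgt -m 0xF &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done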
00:08:05.604 [2024-02-14 20:08:42.828326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.604 [2024-02-14 20:08:42.828428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.604 [2024-02-14 20:08:42.828514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.604 [2024-02-14 20:08:42.828515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.174 20:08:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:06.174 20:08:43 -- common/autotest_common.sh@850 -- # return 0 00:08:06.174 20:08:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:06.174 20:08:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:06.174 20:08:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.174 20:08:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:06.174 20:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.174 20:08:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.174 [2024-02-14 20:08:43.516796] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.174 20:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:06.174 20:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.174 20:08:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.174 20:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:06.174 20:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.174 20:08:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.174 20:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:06.174 20:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.174 20:08:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.174 20:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.174 20:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.174 20:08:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.174 [2024-02-14 20:08:43.568374] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.174 20:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:06.174 20:08:43 -- target/connect_disconnect.sh@34 -- # set +x 00:08:08.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
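[Editor's note: the RPCs traced above provision the target end to end — TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem carrying that namespace, and a listener on 10.0.0.2:4420 — before the 100-iteration connect/disconnect loop starts. A hedged equivalent driving scripts/rpc.py and nvme-cli directly; the rpc_cmd wrapper in the log resolves to the same RPC names and flags:]

  # target side, mirroring the rpc_cmd calls in the trace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 64 512        # prints the new bdev name, e.g. Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: 100 cycles; each disconnect prints one of the
  # "NQN:... disconnected 1 controller(s)" lines condensed below
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done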
[connect/disconnect loop output condensed: the remaining iterations each log one 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' line, timestamped from 00:08:18.322 through 00:11:59.381; only the final iteration and the trap reset that follows it are kept]
00:11:59.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.381 20:12:36 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:11:59.381 20:12:36 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:59.381 20:12:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:59.381 20:12:36 -- nvmf/common.sh@116 -- # sync 00:11:59.381 20:12:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:59.381 20:12:36 -- nvmf/common.sh@119 -- # set +e 00:11:59.381 20:12:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:59.381 20:12:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:59.381 rmmod nvme_tcp 00:11:59.381 rmmod nvme_fabrics 00:11:59.381 rmmod nvme_keyring 00:11:59.381 20:12:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:59.381 20:12:36 -- nvmf/common.sh@123 -- # set -e 00:11:59.381 20:12:36 -- nvmf/common.sh@124 -- # return 0 00:11:59.381 20:12:36 -- nvmf/common.sh@477 -- # '[' -n 1641860 ']' 00:11:59.381 20:12:36 -- nvmf/common.sh@478 -- # killprocess 1641860 00:11:59.381 20:12:36 -- common/autotest_common.sh@924 -- # '[' -z 1641860 ']' 00:11:59.381 20:12:36 -- common/autotest_common.sh@928 -- # kill -0 1641860 00:11:59.381 20:12:36 -- common/autotest_common.sh@929 -- # uname 00:11:59.381 20:12:36 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:59.382 20:12:36 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1641860 00:11:59.382 20:12:36 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:59.382 20:12:36 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:59.382 20:12:36 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1641860' 00:11:59.382 killing process with pid 1641860 00:11:59.382 20:12:36 -- common/autotest_common.sh@943 -- # kill 1641860 00:11:59.382 20:12:36 -- common/autotest_common.sh@948 -- # wait 1641860 00:11:59.641 20:12:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:59.641 20:12:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:59.641 20:12:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:59.641 20:12:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.641 20:12:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:59.641 20:12:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.641 20:12:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.641 20:12:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.179 20:12:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:02.179 00:12:02.179 real 4m2.459s 00:12:02.179 user 15m29.108s 00:12:02.179 sys 0m21.008s 00:12:02.179 20:12:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.179 20:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:02.179 ************************************ 00:12:02.179 END TEST nvmf_connect_disconnect 00:12:02.179 ************************************ 00:12:02.179 20:12:39 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.179 20:12:39 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:02.179 20:12:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:02.179 20:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:02.179 ************************************ 00:12:02.179 START TEST nvmf_multitarget 00:12:02.179 ************************************ 00:12:02.179 20:12:39 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.179 * Looking for test storage... 
00:12:02.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.179 20:12:39 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.179 20:12:39 -- nvmf/common.sh@7 -- # uname -s 00:12:02.179 20:12:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.179 20:12:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.179 20:12:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.179 20:12:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.179 20:12:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.179 20:12:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.179 20:12:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.179 20:12:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.179 20:12:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.179 20:12:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.179 20:12:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:02.179 20:12:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:02.179 20:12:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.179 20:12:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.179 20:12:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.179 20:12:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.179 20:12:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.179 20:12:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.179 20:12:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 elided: the same ~1.1 kB PATH value already logged in full at the top of this excerpt is re-prepended with the golangci/go/protoc bin directories, exported, and echoed back]
00:12:02.179 20:12:39 -- nvmf/common.sh@46 -- # : 0 00:12:02.179 20:12:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:02.179 20:12:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:02.179 20:12:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:02.179 20:12:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.179 20:12:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.179 20:12:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:02.179 20:12:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:02.179 20:12:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:02.179 20:12:39 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:02.179 20:12:39 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:02.179 20:12:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:02.179 20:12:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.179 20:12:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:02.179 20:12:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:02.179 20:12:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:02.179 20:12:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.179 20:12:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.179 20:12:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.179 20:12:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:02.179 20:12:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:02.179 20:12:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:02.179 20:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:08.745 20:12:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:08.745 20:12:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:08.745 20:12:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:08.745 20:12:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:08.745 20:12:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:08.745 20:12:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:08.745 20:12:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:08.745 20:12:45 -- nvmf/common.sh@294 -- # net_devs=() 00:12:08.745 20:12:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:08.745 20:12:45 --
nvmf/common.sh@295 -- # e810=() 00:12:08.745 20:12:45 -- nvmf/common.sh@295 -- # local -ga e810 00:12:08.745 20:12:45 -- nvmf/common.sh@296 -- # x722=() 00:12:08.745 20:12:45 -- nvmf/common.sh@296 -- # local -ga x722 00:12:08.745 20:12:45 -- nvmf/common.sh@297 -- # mlx=() 00:12:08.745 20:12:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:08.745 20:12:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.745 20:12:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:08.745 20:12:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:08.745 20:12:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:08.745 20:12:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:08.745 20:12:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:08.745 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:08.745 20:12:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:08.745 20:12:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:08.745 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:08.745 20:12:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:08.745 20:12:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:08.745 20:12:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.745 20:12:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:08.745 20:12:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.745 20:12:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:12:08.745 Found net devices under 0000:af:00.0: cvl_0_0 00:12:08.745 20:12:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.745 20:12:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:08.745 20:12:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.745 20:12:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:08.745 20:12:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.745 20:12:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:08.745 Found net devices under 0000:af:00.1: cvl_0_1 00:12:08.745 20:12:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.745 20:12:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:08.745 20:12:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:08.745 20:12:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:08.745 20:12:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:08.745 20:12:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.745 20:12:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.745 20:12:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.745 20:12:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:08.745 20:12:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.745 20:12:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.745 20:12:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:08.745 20:12:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.745 20:12:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.745 20:12:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:08.745 20:12:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:08.745 20:12:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.745 20:12:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.745 20:12:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.746 20:12:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.746 20:12:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:08.746 20:12:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.746 20:12:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.746 20:12:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.746 20:12:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:08.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:12:08.746 00:12:08.746 --- 10.0.0.2 ping statistics --- 00:12:08.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.746 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:12:08.746 20:12:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:12:08.746 00:12:08.746 --- 10.0.0.1 ping statistics --- 00:12:08.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.746 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:08.746 20:12:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.746 20:12:45 -- nvmf/common.sh@410 -- # return 0 00:12:08.746 20:12:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:08.746 20:12:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.746 20:12:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:08.746 20:12:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:08.746 20:12:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.746 20:12:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:08.746 20:12:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:08.746 20:12:45 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:08.746 20:12:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:08.746 20:12:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:08.746 20:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:08.746 20:12:45 -- nvmf/common.sh@469 -- # nvmfpid=1687432 00:12:08.746 20:12:45 -- nvmf/common.sh@470 -- # waitforlisten 1687432 00:12:08.746 20:12:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.746 20:12:45 -- common/autotest_common.sh@817 -- # '[' -z 1687432 ']' 00:12:08.746 20:12:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.746 20:12:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:08.746 20:12:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.746 20:12:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:08.746 20:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:08.746 [2024-02-14 20:12:45.665472] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:08.746 [2024-02-14 20:12:45.665511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.746 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.746 [2024-02-14 20:12:45.726506] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.746 [2024-02-14 20:12:45.795946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:08.746 [2024-02-14 20:12:45.796058] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.746 [2024-02-14 20:12:45.796065] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.746 [2024-02-14 20:12:45.796071] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:08.746 [2024-02-14 20:12:45.796189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.746 [2024-02-14 20:12:45.796271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.746 [2024-02-14 20:12:45.796434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.746 [2024-02-14 20:12:45.796436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.312 20:12:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:09.312 20:12:46 -- common/autotest_common.sh@850 -- # return 0 00:12:09.312 20:12:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:09.312 20:12:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:09.312 20:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.312 20:12:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.312 20:12:46 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:09.312 20:12:46 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:09.312 20:12:46 -- target/multitarget.sh@21 -- # jq length 00:12:09.312 20:12:46 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:09.312 20:12:46 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:09.312 "nvmf_tgt_1" 00:12:09.312 20:12:46 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:09.570 "nvmf_tgt_2" 00:12:09.570 20:12:46 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:09.570 20:12:46 -- target/multitarget.sh@28 -- # jq length 00:12:09.570 20:12:46 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:09.570 20:12:46 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:09.828 true 00:12:09.828 20:12:47 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:09.828 true 00:12:09.828 20:12:47 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:09.828 20:12:47 -- target/multitarget.sh@35 -- # jq length 00:12:09.828 20:12:47 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:09.828 20:12:47 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:09.828 20:12:47 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:09.828 20:12:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:09.828 20:12:47 -- nvmf/common.sh@116 -- # sync 00:12:09.828 20:12:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:09.828 20:12:47 -- nvmf/common.sh@119 -- # set +e 00:12:09.828 20:12:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:09.828 20:12:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:09.828 rmmod nvme_tcp 00:12:10.087 rmmod nvme_fabrics 00:12:10.087 rmmod nvme_keyring 00:12:10.087 20:12:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:10.087 20:12:47 -- nvmf/common.sh@123 -- # set -e 00:12:10.087 20:12:47 -- nvmf/common.sh@124 -- # return 0 
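[Editor's note: the multitarget test above drives test/nvmf/target/multitarget_rpc.py, which exposes the nvmf_create_target/nvmf_delete_target/nvmf_get_targets RPCs, and asserts each step by counting targets with jq length. The same sequence sketched as standalone shell, assuming it is run from the SPDK repo root against the already-running target:]

  RPC=./test/nvmf/target/multitarget_rpc.py
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]    # only the default target at start
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]    # default plus the two new targets
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]    # back to the default target only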
00:12:10.087 20:12:47 -- nvmf/common.sh@477 -- # '[' -n 1687432 ']' 00:12:10.087 20:12:47 -- nvmf/common.sh@478 -- # killprocess 1687432 00:12:10.087 20:12:47 -- common/autotest_common.sh@924 -- # '[' -z 1687432 ']' 00:12:10.087 20:12:47 -- common/autotest_common.sh@928 -- # kill -0 1687432 00:12:10.087 20:12:47 -- common/autotest_common.sh@929 -- # uname 00:12:10.087 20:12:47 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:10.087 20:12:47 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1687432 00:12:10.087 20:12:47 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:10.087 20:12:47 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:10.087 20:12:47 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1687432' 00:12:10.087 killing process with pid 1687432 00:12:10.087 20:12:47 -- common/autotest_common.sh@943 -- # kill 1687432 00:12:10.087 20:12:47 -- common/autotest_common.sh@948 -- # wait 1687432 00:12:10.346 20:12:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:10.346 20:12:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:10.346 20:12:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:10.346 20:12:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.346 20:12:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:10.346 20:12:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.346 20:12:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.346 20:12:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.250 20:12:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:12.251 00:12:12.251 real 0m10.467s 00:12:12.251 user 0m9.318s 00:12:12.251 sys 0m5.258s 00:12:12.251 20:12:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:12.251 20:12:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.251 ************************************ 00:12:12.251 END TEST nvmf_multitarget 00:12:12.251 ************************************ 00:12:12.251 20:12:49 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.251 20:12:49 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:12.251 20:12:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:12.251 20:12:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.251 ************************************ 00:12:12.251 START TEST nvmf_rpc 00:12:12.251 ************************************ 00:12:12.251 20:12:49 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.508 * Looking for test storage... 
00:12:12.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.509 20:12:49 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.509 20:12:49 -- nvmf/common.sh@7 -- # uname -s 00:12:12.509 20:12:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.509 20:12:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.509 20:12:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.509 20:12:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.509 20:12:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.509 20:12:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.509 20:12:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.509 20:12:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.509 20:12:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.509 20:12:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.509 20:12:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:12.509 20:12:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:12.509 20:12:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.509 20:12:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.509 20:12:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.509 20:12:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.509 20:12:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.509 20:12:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.509 20:12:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 elided again: same PATH re-export sequence as in the two preceding tests]
00:12:12.509 20:12:49 -- nvmf/common.sh@46 -- # : 0 00:12:12.509 20:12:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:12.509 20:12:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:12.509 20:12:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:12.509 20:12:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.509 20:12:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.509 20:12:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:12.509 20:12:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:12.509 20:12:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:12.509 20:12:49 -- target/rpc.sh@11 -- # loops=5 00:12:12.509 20:12:49 -- target/rpc.sh@23 -- # nvmftestinit 00:12:12.509 20:12:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:12.509 20:12:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.509 20:12:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:12.509 20:12:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:12.509 20:12:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:12.509 20:12:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.509 20:12:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.509 20:12:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.509 20:12:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:12.509 20:12:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:12.509 20:12:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:12.509 20:12:49 -- common/autotest_common.sh@10 -- # set +x 00:12:19.072 20:12:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:19.072 20:12:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:19.072 20:12:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:19.072 20:12:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:19.072 20:12:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:19.072 20:12:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:19.072 20:12:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:19.072 20:12:55 -- nvmf/common.sh@294 -- # net_devs=() 00:12:19.072 20:12:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:19.072 20:12:55 -- nvmf/common.sh@295 -- # e810=() 00:12:19.072 20:12:55 -- nvmf/common.sh@295 -- # local -ga e810 00:12:19.072
20:12:55 -- nvmf/common.sh@296 -- # x722=() 00:12:19.072 20:12:55 -- nvmf/common.sh@296 -- # local -ga x722 00:12:19.072 20:12:55 -- nvmf/common.sh@297 -- # mlx=() 00:12:19.072 20:12:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:19.072 20:12:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.072 20:12:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:19.072 20:12:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:19.072 20:12:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:19.072 20:12:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:19.072 20:12:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:19.072 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:19.072 20:12:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:19.072 20:12:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:19.072 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:19.072 20:12:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:19.072 20:12:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:19.072 20:12:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.072 20:12:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:19.072 20:12:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.072 20:12:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:19.072 Found net devices under 0000:af:00.0: cvl_0_0 00:12:19.072 20:12:55 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:19.072 20:12:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:19.072 20:12:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.072 20:12:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:19.072 20:12:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.072 20:12:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:19.072 Found net devices under 0000:af:00.1: cvl_0_1 00:12:19.072 20:12:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.072 20:12:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:19.072 20:12:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:19.072 20:12:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:19.072 20:12:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:19.072 20:12:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.072 20:12:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.072 20:12:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.072 20:12:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:19.072 20:12:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.072 20:12:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.072 20:12:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:19.072 20:12:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.072 20:12:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.072 20:12:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:19.072 20:12:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:19.072 20:12:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.072 20:12:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.072 20:12:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.072 20:12:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.072 20:12:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:19.072 20:12:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.072 20:12:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.072 20:12:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.072 20:12:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:19.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:12:19.072 00:12:19.072 --- 10.0.0.2 ping statistics --- 00:12:19.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.072 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:12:19.072 20:12:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:12:19.072 00:12:19.072 --- 10.0.0.1 ping statistics --- 00:12:19.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.072 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:12:19.072 20:12:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.072 20:12:56 -- nvmf/common.sh@410 -- # return 0 00:12:19.072 20:12:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:19.072 20:12:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.072 20:12:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:19.072 20:12:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:19.072 20:12:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.072 20:12:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:19.072 20:12:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:19.072 20:12:56 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:19.072 20:12:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:19.072 20:12:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:19.072 20:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.072 20:12:56 -- nvmf/common.sh@469 -- # nvmfpid=1691536 00:12:19.072 20:12:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.072 20:12:56 -- nvmf/common.sh@470 -- # waitforlisten 1691536 00:12:19.072 20:12:56 -- common/autotest_common.sh@817 -- # '[' -z 1691536 ']' 00:12:19.072 20:12:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.072 20:12:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:19.072 20:12:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.072 20:12:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:19.072 20:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.072 [2024-02-14 20:12:56.155497] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:19.072 [2024-02-14 20:12:56.155549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.072 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.072 [2024-02-14 20:12:56.216883] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.072 [2024-02-14 20:12:56.292657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:19.072 [2024-02-14 20:12:56.292766] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.072 [2024-02-14 20:12:56.292773] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.072 [2024-02-14 20:12:56.292779] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:19.072 [2024-02-14 20:12:56.292900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.072 [2024-02-14 20:12:56.292985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.072 [2024-02-14 20:12:56.293150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.072 [2024-02-14 20:12:56.293152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.639 20:12:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:19.639 20:12:56 -- common/autotest_common.sh@850 -- # return 0 00:12:19.639 20:12:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:19.639 20:12:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:19.639 20:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.639 20:12:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.639 20:12:56 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:19.639 20:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.639 20:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.639 20:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.639 20:12:56 -- target/rpc.sh@26 -- # stats='{ 00:12:19.639 "tick_rate": 2100000000, 00:12:19.639 "poll_groups": [ 00:12:19.639 { 00:12:19.639 "name": "nvmf_tgt_poll_group_0", 00:12:19.639 "admin_qpairs": 0, 00:12:19.639 "io_qpairs": 0, 00:12:19.639 "current_admin_qpairs": 0, 00:12:19.639 "current_io_qpairs": 0, 00:12:19.639 "pending_bdev_io": 0, 00:12:19.639 "completed_nvme_io": 0, 00:12:19.639 "transports": [] 00:12:19.639 }, 00:12:19.639 { 00:12:19.639 "name": "nvmf_tgt_poll_group_1", 00:12:19.639 "admin_qpairs": 0, 00:12:19.639 "io_qpairs": 0, 00:12:19.639 "current_admin_qpairs": 0, 00:12:19.639 "current_io_qpairs": 0, 00:12:19.639 "pending_bdev_io": 0, 00:12:19.639 "completed_nvme_io": 0, 00:12:19.639 "transports": [] 00:12:19.639 }, 00:12:19.639 { 00:12:19.639 "name": "nvmf_tgt_poll_group_2", 00:12:19.639 "admin_qpairs": 0, 00:12:19.639 "io_qpairs": 0, 00:12:19.639 "current_admin_qpairs": 0, 00:12:19.639 "current_io_qpairs": 0, 00:12:19.639 "pending_bdev_io": 0, 00:12:19.639 "completed_nvme_io": 0, 00:12:19.639 "transports": [] 00:12:19.639 }, 00:12:19.639 { 00:12:19.639 "name": "nvmf_tgt_poll_group_3", 00:12:19.639 "admin_qpairs": 0, 00:12:19.639 "io_qpairs": 0, 00:12:19.639 "current_admin_qpairs": 0, 00:12:19.639 "current_io_qpairs": 0, 00:12:19.639 "pending_bdev_io": 0, 00:12:19.639 "completed_nvme_io": 0, 00:12:19.639 "transports": [] 00:12:19.639 } 00:12:19.639 ] 00:12:19.639 }' 00:12:19.639 20:12:56 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:19.639 20:12:56 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:19.639 20:12:57 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:19.639 20:12:57 -- target/rpc.sh@15 -- # wc -l 00:12:19.639 20:12:57 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:19.639 20:12:57 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:19.898 20:12:57 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:19.898 20:12:57 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.898 20:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.898 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.898 [2024-02-14 20:12:57.094229] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.898 20:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.898 20:12:57 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:19.898 20:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.898 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.898 20:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.898 20:12:57 -- target/rpc.sh@33 -- # stats='{ 00:12:19.898 "tick_rate": 2100000000, 00:12:19.898 "poll_groups": [ 00:12:19.898 { 00:12:19.898 "name": "nvmf_tgt_poll_group_0", 00:12:19.898 "admin_qpairs": 0, 00:12:19.898 "io_qpairs": 0, 00:12:19.898 "current_admin_qpairs": 0, 00:12:19.898 "current_io_qpairs": 0, 00:12:19.898 "pending_bdev_io": 0, 00:12:19.898 "completed_nvme_io": 0, 00:12:19.898 "transports": [ 00:12:19.898 { 00:12:19.898 "trtype": "TCP" 00:12:19.898 } 00:12:19.898 ] 00:12:19.898 }, 00:12:19.898 { 00:12:19.898 "name": "nvmf_tgt_poll_group_1", 00:12:19.898 "admin_qpairs": 0, 00:12:19.898 "io_qpairs": 0, 00:12:19.898 "current_admin_qpairs": 0, 00:12:19.898 "current_io_qpairs": 0, 00:12:19.898 "pending_bdev_io": 0, 00:12:19.898 "completed_nvme_io": 0, 00:12:19.898 "transports": [ 00:12:19.898 { 00:12:19.898 "trtype": "TCP" 00:12:19.898 } 00:12:19.898 ] 00:12:19.898 }, 00:12:19.898 { 00:12:19.898 "name": "nvmf_tgt_poll_group_2", 00:12:19.898 "admin_qpairs": 0, 00:12:19.898 "io_qpairs": 0, 00:12:19.898 "current_admin_qpairs": 0, 00:12:19.898 "current_io_qpairs": 0, 00:12:19.898 "pending_bdev_io": 0, 00:12:19.898 "completed_nvme_io": 0, 00:12:19.898 "transports": [ 00:12:19.898 { 00:12:19.898 "trtype": "TCP" 00:12:19.898 } 00:12:19.898 ] 00:12:19.898 }, 00:12:19.898 { 00:12:19.898 "name": "nvmf_tgt_poll_group_3", 00:12:19.898 "admin_qpairs": 0, 00:12:19.898 "io_qpairs": 0, 00:12:19.898 "current_admin_qpairs": 0, 00:12:19.898 "current_io_qpairs": 0, 00:12:19.898 "pending_bdev_io": 0, 00:12:19.898 "completed_nvme_io": 0, 00:12:19.898 "transports": [ 00:12:19.898 { 00:12:19.898 "trtype": "TCP" 00:12:19.898 } 00:12:19.898 ] 00:12:19.898 } 00:12:19.898 ] 00:12:19.898 }' 00:12:19.898 20:12:57 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:19.898 20:12:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:19.898 20:12:57 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:19.898 20:12:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.898 20:12:57 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:19.898 20:12:57 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:19.898 20:12:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:19.898 20:12:57 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:19.898 20:12:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.899 20:12:57 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:19.899 20:12:57 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:19.899 20:12:57 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:19.899 20:12:57 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:19.899 20:12:57 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:19.899 20:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.899 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.899 Malloc1 00:12:19.899 20:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.899 20:12:57 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:19.899 20:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.899 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.899 
20:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.899 20:12:57 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.899 20:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.899 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.899 20:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.899 20:12:57 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:19.899 20:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.899 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.899 20:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.899 20:12:57 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.899 20:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.899 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.899 [2024-02-14 20:12:57.262287] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.899 20:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.899 20:12:57 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:19.899 20:12:57 -- common/autotest_common.sh@638 -- # local es=0 00:12:19.899 20:12:57 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:19.899 20:12:57 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:19.899 20:12:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.899 20:12:57 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:19.899 20:12:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.899 20:12:57 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:19.899 20:12:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.899 20:12:57 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:19.899 20:12:57 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:19.899 20:12:57 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:19.899 [2024-02-14 20:12:57.290890] ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:19.899 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:19.899 could not add new controller: failed to write to nvme-fabrics device 00:12:20.157 20:12:57 -- common/autotest_common.sh@641 -- # es=1 00:12:20.157 20:12:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:20.157 20:12:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:20.157 20:12:57 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:12:20.157 20:12:57 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:20.157 20:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.157 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.157 20:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.157 20:12:57 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.094 20:12:58 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.094 20:12:58 -- common/autotest_common.sh@1175 -- # local i=0 00:12:21.094 20:12:58 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.094 20:12:58 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:21.094 20:12:58 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:23.628 20:13:00 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:23.628 20:13:00 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:23.628 20:13:00 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.628 20:13:00 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:23.628 20:13:00 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.628 20:13:00 -- common/autotest_common.sh@1185 -- # return 0 00:12:23.628 20:13:00 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.628 20:13:00 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.628 20:13:00 -- common/autotest_common.sh@1196 -- # local i=0 00:12:23.628 20:13:00 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:23.628 20:13:00 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.628 20:13:00 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:23.628 20:13:00 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.628 20:13:00 -- common/autotest_common.sh@1208 -- # return 0 00:12:23.629 20:13:00 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:23.629 20:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.629 20:13:00 -- common/autotest_common.sh@10 -- # set +x 00:12:23.629 20:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.629 20:13:00 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.629 20:13:00 -- common/autotest_common.sh@638 -- # local es=0 00:12:23.629 20:13:00 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.629 20:13:00 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:23.629 20:13:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:23.629 20:13:00 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:23.629 20:13:00 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:23.629 20:13:00 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:23.629 20:13:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:23.629 20:13:00 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:23.629 20:13:00 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:23.629 20:13:00 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.629 [2024-02-14 20:13:00.598352] ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:23.629 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:23.629 could not add new controller: failed to write to nvme-fabrics device 00:12:23.629 20:13:00 -- common/autotest_common.sh@641 -- # es=1 00:12:23.629 20:13:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:23.629 20:13:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:23.629 20:13:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:23.629 20:13:00 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:23.629 20:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.629 20:13:00 -- common/autotest_common.sh@10 -- # set +x 00:12:23.629 20:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.629 20:13:00 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.564 20:13:01 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.564 20:13:01 -- common/autotest_common.sh@1175 -- # local i=0 00:12:24.564 20:13:01 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.564 20:13:01 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:24.564 20:13:01 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:26.466 20:13:03 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:26.466 20:13:03 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:26.466 20:13:03 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.466 20:13:03 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:26.466 20:13:03 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.466 20:13:03 -- common/autotest_common.sh@1185 -- # return 0 00:12:26.466 20:13:03 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.725 20:13:03 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.725 20:13:03 -- common/autotest_common.sh@1196 -- # local i=0 00:12:26.725 20:13:03 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:26.725 20:13:03 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.725 20:13:03 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:26.725 20:13:03 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.725 20:13:03 -- common/autotest_common.sh@1208 -- # return 0 00:12:26.725 20:13:03 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.725 20:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.725 20:13:03 -- common/autotest_common.sh@10 -- # set +x 00:12:26.725 20:13:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.725 20:13:04 -- target/rpc.sh@81 -- # seq 1 5 00:12:26.725 20:13:04 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.725 20:13:04 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.725 20:13:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.725 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:12:26.725 20:13:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.725 20:13:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.725 20:13:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.725 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:12:26.725 [2024-02-14 20:13:04.026869] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.725 20:13:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.725 20:13:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.725 20:13:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.725 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:12:26.725 20:13:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.725 20:13:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.725 20:13:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.725 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:12:26.725 20:13:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.725 20:13:04 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.101 20:13:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.101 20:13:05 -- common/autotest_common.sh@1175 -- # local i=0 00:12:28.101 20:13:05 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.101 20:13:05 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:28.101 20:13:05 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:30.003 20:13:07 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:30.003 20:13:07 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:30.003 20:13:07 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.003 20:13:07 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:30.003 20:13:07 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.003 20:13:07 -- common/autotest_common.sh@1185 -- # return 0 00:12:30.003 20:13:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.003 20:13:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.003 20:13:07 -- common/autotest_common.sh@1196 -- # local i=0 00:12:30.003 20:13:07 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:30.003 20:13:07 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 
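The exchange above (rpc.sh lines 52 through 78) is the host-ACL half of the test: with allow-any-host switched off, a connect from an unlisted host NQN must be rejected, and the NOT wrapper asserts the expected non-zero exit. Reduced to the underlying sequence, assuming rpc.py stands for SPDK's scripts/rpc.py talking to the target over /var/tmp/spdk.sock (rpc_cmd in the trace is a thin wrapper around it):

    NQN=nqn.2016-06.io.spdk:cnode1
    HOST=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
    rpc.py nvmf_subsystem_allow_any_host -d "$NQN"    # close the subsystem to unlisted hosts
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOST" \
        && echo "BUG: connect should have been rejected"
    rpc.py nvmf_subsystem_add_host "$NQN" "$HOST"     # whitelist exactly this host NQN
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOST"   # now succeeds
    nvme disconnect -n "$NQN"
    rpc.py nvmf_subsystem_remove_host "$NQN" "$HOST"  # back to rejected
    rpc.py nvmf_subsystem_allow_any_host -e "$NQN"    # or reopen it to every host

A rejected attempt shows up on both ends at once: the target logs nvmf_qpair_access_allowed "does not allow host", and the initiator sees "Failed to write to /dev/nvme-fabrics: Input/output error", exactly as captured in the trace above.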
00:12:30.003 20:13:07 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:30.003 20:13:07 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.003 20:13:07 -- common/autotest_common.sh@1208 -- # return 0 00:12:30.003 20:13:07 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.003 20:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.003 20:13:07 -- common/autotest_common.sh@10 -- # set +x 00:12:30.003 20:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.003 20:13:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.003 20:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.003 20:13:07 -- common/autotest_common.sh@10 -- # set +x 00:12:30.262 20:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.262 20:13:07 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.262 20:13:07 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.262 20:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.262 20:13:07 -- common/autotest_common.sh@10 -- # set +x 00:12:30.262 20:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.262 20:13:07 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.262 20:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.262 20:13:07 -- common/autotest_common.sh@10 -- # set +x 00:12:30.262 [2024-02-14 20:13:07.439978] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.262 20:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.262 20:13:07 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.262 20:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.262 20:13:07 -- common/autotest_common.sh@10 -- # set +x 00:12:30.262 20:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.262 20:13:07 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.262 20:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.262 20:13:07 -- common/autotest_common.sh@10 -- # set +x 00:12:30.262 20:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.262 20:13:07 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.637 20:13:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.637 20:13:08 -- common/autotest_common.sh@1175 -- # local i=0 00:12:31.637 20:13:08 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.637 20:13:08 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:31.637 20:13:08 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:33.541 20:13:10 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:33.541 20:13:10 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:33.541 20:13:10 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.541 20:13:10 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:33.541 20:13:10 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.541 20:13:10 -- 
common/autotest_common.sh@1185 -- # return 0 00:12:33.541 20:13:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.541 20:13:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.541 20:13:10 -- common/autotest_common.sh@1196 -- # local i=0 00:12:33.541 20:13:10 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:33.541 20:13:10 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.541 20:13:10 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:33.541 20:13:10 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.541 20:13:10 -- common/autotest_common.sh@1208 -- # return 0 00:12:33.541 20:13:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.541 20:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.541 20:13:10 -- common/autotest_common.sh@10 -- # set +x 00:12:33.541 20:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.541 20:13:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.541 20:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.541 20:13:10 -- common/autotest_common.sh@10 -- # set +x 00:12:33.541 20:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.541 20:13:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.541 20:13:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.541 20:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.541 20:13:10 -- common/autotest_common.sh@10 -- # set +x 00:12:33.541 20:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.541 20:13:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.541 20:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.541 20:13:10 -- common/autotest_common.sh@10 -- # set +x 00:12:33.541 [2024-02-14 20:13:10.803869] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.541 20:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.541 20:13:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.541 20:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.541 20:13:10 -- common/autotest_common.sh@10 -- # set +x 00:12:33.541 20:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.541 20:13:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.541 20:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.541 20:13:10 -- common/autotest_common.sh@10 -- # set +x 00:12:33.541 20:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.541 20:13:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.917 20:13:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.917 20:13:11 -- common/autotest_common.sh@1175 -- # local i=0 00:12:34.917 20:13:11 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.917 20:13:11 -- common/autotest_common.sh@1177 -- 
# [[ -n '' ]] 00:12:34.917 20:13:11 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:36.817 20:13:13 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:36.818 20:13:13 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:36.818 20:13:13 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.818 20:13:13 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:36.818 20:13:13 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.818 20:13:13 -- common/autotest_common.sh@1185 -- # return 0 00:12:36.818 20:13:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.818 20:13:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.818 20:13:14 -- common/autotest_common.sh@1196 -- # local i=0 00:12:36.818 20:13:14 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:36.818 20:13:14 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.818 20:13:14 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:36.818 20:13:14 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.818 20:13:14 -- common/autotest_common.sh@1208 -- # return 0 00:12:36.818 20:13:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.818 20:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.818 20:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:36.818 20:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.818 20:13:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.818 20:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.818 20:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:36.818 20:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.818 20:13:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.818 20:13:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.818 20:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.818 20:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:36.818 20:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.818 20:13:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.818 20:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.818 20:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:36.818 [2024-02-14 20:13:14.123021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.818 20:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.818 20:13:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.818 20:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.818 20:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:36.818 20:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.818 20:13:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.818 20:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.818 20:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:36.818 20:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.818 
20:13:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.192 20:13:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.192 20:13:15 -- common/autotest_common.sh@1175 -- # local i=0 00:12:38.192 20:13:15 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.192 20:13:15 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:38.192 20:13:15 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:40.095 20:13:17 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:40.095 20:13:17 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:40.095 20:13:17 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.095 20:13:17 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:40.095 20:13:17 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.095 20:13:17 -- common/autotest_common.sh@1185 -- # return 0 00:12:40.095 20:13:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.095 20:13:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.095 20:13:17 -- common/autotest_common.sh@1196 -- # local i=0 00:12:40.095 20:13:17 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:40.095 20:13:17 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.095 20:13:17 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:40.095 20:13:17 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.095 20:13:17 -- common/autotest_common.sh@1208 -- # return 0 00:12:40.095 20:13:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.095 20:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.095 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:40.354 20:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.354 20:13:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.354 20:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.354 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:40.354 20:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.354 20:13:17 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.354 20:13:17 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.354 20:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.354 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:40.354 20:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.354 20:13:17 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.354 20:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.354 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:40.354 [2024-02-14 20:13:17.536519] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.354 20:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.354 20:13:17 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.354 
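Iterations two through five of the loop repeat the same create/listen/add-ns/connect/tear-down cycle (rpc.sh lines 81 through 94), so the interesting part is the verification helpers. waitforserial and waitforserial_disconnect are bounded polling loops over lsblk; a sketch of the idea, assuming the SPDKISFASTANDAWESOME serial set at subsystem creation:

    # wait (up to ~30 s) for the fabric namespace to surface as a block device
    i=0
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )); do
        (( ++i > 15 )) && { echo "waitforserial timed out"; exit 1; }
        sleep 2
    done
    # and, after nvme disconnect, wait for it to drop back out
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done

Keying on the controller serial instead of a device node keeps the check stable even though the kernel may hand out a different /dev/nvmeXnY on every iteration.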
20:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.354 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:40.354 20:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.354 20:13:17 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.354 20:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.354 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:40.354 20:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.354 20:13:17 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.750 20:13:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.750 20:13:18 -- common/autotest_common.sh@1175 -- # local i=0 00:12:41.750 20:13:18 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.750 20:13:18 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:41.750 20:13:18 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:43.697 20:13:20 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:43.697 20:13:20 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:43.697 20:13:20 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.697 20:13:20 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:43.697 20:13:20 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.697 20:13:20 -- common/autotest_common.sh@1185 -- # return 0 00:12:43.697 20:13:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.697 20:13:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.697 20:13:20 -- common/autotest_common.sh@1196 -- # local i=0 00:12:43.697 20:13:20 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:43.697 20:13:20 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.697 20:13:20 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:43.697 20:13:20 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.697 20:13:20 -- common/autotest_common.sh@1208 -- # return 0 00:12:43.697 20:13:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.697 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.697 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.697 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.697 20:13:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.697 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.697 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.697 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.697 20:13:20 -- target/rpc.sh@99 -- # seq 1 5 00:12:43.697 20:13:20 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.697 20:13:20 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 [2024-02-14 20:13:20.906892] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.698 20:13:20 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 [2024-02-14 20:13:20.955004] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.698 20:13:20 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:20 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.698 20:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 [2024-02-14 20:13:21.003140] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.698 20:13:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 [2024-02-14 20:13:21.055313] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 
20:13:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.698 20:13:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 [2024-02-14 20:13:21.103482] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.698 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.698 20:13:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.698 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.698 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.958 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.958 20:13:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.958 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.958 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.958 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.958 20:13:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.958 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.958 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.958 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.958 20:13:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.958 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.958 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.958 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.958 20:13:21 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
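The nvmf_get_stats call issued here closes the accounting loop: after all the connect cycles, the per-poll-group counters must reflect the admin and I/O qpairs that came and went. The JSON dump that follows is reduced with rpc.sh's jsum helper, which is just jq piped through awk; a sketch, again assuming rpc.py as the RPC client:

    stats=$(rpc.py nvmf_get_stats)
    jsum() {                            # sum one numeric field across all poll groups
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'  # 7 in this run (2 + 2 + 1 + 2)
    jsum '.poll_groups[].io_qpairs'     # 672 in this run (4 poll groups x 168)

The assertions are deliberately loose, only (( sum > 0 )), because how qpairs land on poll groups is a scheduling detail rather than something the test controls.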
00:12:43.958 20:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.958 20:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:43.958 20:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.958 20:13:21 -- target/rpc.sh@110 -- # stats='{ 00:12:43.958 "tick_rate": 2100000000, 00:12:43.958 "poll_groups": [ 00:12:43.958 { 00:12:43.958 "name": "nvmf_tgt_poll_group_0", 00:12:43.958 "admin_qpairs": 2, 00:12:43.958 "io_qpairs": 168, 00:12:43.958 "current_admin_qpairs": 0, 00:12:43.958 "current_io_qpairs": 0, 00:12:43.958 "pending_bdev_io": 0, 00:12:43.958 "completed_nvme_io": 219, 00:12:43.958 "transports": [ 00:12:43.958 { 00:12:43.958 "trtype": "TCP" 00:12:43.958 } 00:12:43.958 ] 00:12:43.958 }, 00:12:43.958 { 00:12:43.958 "name": "nvmf_tgt_poll_group_1", 00:12:43.958 "admin_qpairs": 2, 00:12:43.958 "io_qpairs": 168, 00:12:43.958 "current_admin_qpairs": 0, 00:12:43.958 "current_io_qpairs": 0, 00:12:43.958 "pending_bdev_io": 0, 00:12:43.958 "completed_nvme_io": 315, 00:12:43.958 "transports": [ 00:12:43.958 { 00:12:43.958 "trtype": "TCP" 00:12:43.958 } 00:12:43.958 ] 00:12:43.958 }, 00:12:43.958 { 00:12:43.958 "name": "nvmf_tgt_poll_group_2", 00:12:43.958 "admin_qpairs": 1, 00:12:43.958 "io_qpairs": 168, 00:12:43.958 "current_admin_qpairs": 0, 00:12:43.958 "current_io_qpairs": 0, 00:12:43.958 "pending_bdev_io": 0, 00:12:43.958 "completed_nvme_io": 220, 00:12:43.958 "transports": [ 00:12:43.958 { 00:12:43.958 "trtype": "TCP" 00:12:43.958 } 00:12:43.958 ] 00:12:43.958 }, 00:12:43.958 { 00:12:43.958 "name": "nvmf_tgt_poll_group_3", 00:12:43.958 "admin_qpairs": 2, 00:12:43.958 "io_qpairs": 168, 00:12:43.958 "current_admin_qpairs": 0, 00:12:43.958 "current_io_qpairs": 0, 00:12:43.958 "pending_bdev_io": 0, 00:12:43.958 "completed_nvme_io": 268, 00:12:43.958 "transports": [ 00:12:43.958 { 00:12:43.958 "trtype": "TCP" 00:12:43.958 } 00:12:43.958 ] 00:12:43.958 } 00:12:43.958 ] 00:12:43.958 }' 00:12:43.958 20:13:21 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:43.958 20:13:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:43.958 20:13:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:43.958 20:13:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.958 20:13:21 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:43.958 20:13:21 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:43.958 20:13:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:43.958 20:13:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:43.958 20:13:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.958 20:13:21 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:43.958 20:13:21 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:43.958 20:13:21 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:43.958 20:13:21 -- target/rpc.sh@123 -- # nvmftestfini 00:12:43.958 20:13:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:43.958 20:13:21 -- nvmf/common.sh@116 -- # sync 00:12:43.958 20:13:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:43.958 20:13:21 -- nvmf/common.sh@119 -- # set +e 00:12:43.958 20:13:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:43.958 20:13:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:43.958 rmmod nvme_tcp 00:12:43.958 rmmod nvme_fabrics 00:12:43.958 rmmod nvme_keyring 00:12:43.958 20:13:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:43.958 20:13:21 -- nvmf/common.sh@123 -- # set -e 00:12:43.958 20:13:21 -- 
nvmf/common.sh@124 -- # return 0 00:12:43.958 20:13:21 -- nvmf/common.sh@477 -- # '[' -n 1691536 ']' 00:12:43.958 20:13:21 -- nvmf/common.sh@478 -- # killprocess 1691536 00:12:43.958 20:13:21 -- common/autotest_common.sh@924 -- # '[' -z 1691536 ']' 00:12:43.958 20:13:21 -- common/autotest_common.sh@928 -- # kill -0 1691536 00:12:43.958 20:13:21 -- common/autotest_common.sh@929 -- # uname 00:12:43.958 20:13:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:43.958 20:13:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1691536 00:12:43.958 20:13:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:43.958 20:13:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:43.958 20:13:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1691536' 00:12:43.958 killing process with pid 1691536 00:12:43.958 20:13:21 -- common/autotest_common.sh@943 -- # kill 1691536 00:12:43.958 20:13:21 -- common/autotest_common.sh@948 -- # wait 1691536 00:12:44.217 20:13:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:44.217 20:13:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:44.217 20:13:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:44.217 20:13:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.217 20:13:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:44.217 20:13:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.217 20:13:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.217 20:13:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.753 20:13:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:46.753 00:12:46.753 real 0m34.007s 00:12:46.753 user 1m42.573s 00:12:46.753 sys 0m6.597s 00:12:46.753 20:13:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:46.753 20:13:23 -- common/autotest_common.sh@10 -- # set +x 00:12:46.753 ************************************ 00:12:46.753 END TEST nvmf_rpc 00:12:46.753 ************************************ 00:12:46.753 20:13:23 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:46.753 20:13:23 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:46.753 20:13:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:46.753 20:13:23 -- common/autotest_common.sh@10 -- # set +x 00:12:46.753 ************************************ 00:12:46.753 START TEST nvmf_invalid 00:12:46.753 ************************************ 00:12:46.753 20:13:23 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:46.754 * Looking for test storage... 
00:12:46.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.754 20:13:23 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.754 20:13:23 -- nvmf/common.sh@7 -- # uname -s 00:12:46.754 20:13:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.754 20:13:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.754 20:13:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.754 20:13:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.754 20:13:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.754 20:13:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.754 20:13:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.754 20:13:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.754 20:13:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.754 20:13:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.754 20:13:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:46.754 20:13:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:46.754 20:13:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.754 20:13:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.754 20:13:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.754 20:13:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.754 20:13:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.754 20:13:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.754 20:13:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.754 20:13:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.754 20:13:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.754 20:13:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.754 20:13:23 -- paths/export.sh@5 -- # export PATH 00:12:46.754 20:13:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.754 20:13:23 -- nvmf/common.sh@46 -- # : 0 00:12:46.754 20:13:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:46.754 20:13:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:46.754 20:13:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:46.754 20:13:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.754 20:13:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.754 20:13:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:46.754 20:13:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:46.754 20:13:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:46.754 20:13:23 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.754 20:13:23 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:46.754 20:13:23 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:46.754 20:13:23 -- target/invalid.sh@14 -- # target=foobar 00:12:46.754 20:13:23 -- target/invalid.sh@16 -- # RANDOM=0 00:12:46.754 20:13:23 -- target/invalid.sh@34 -- # nvmftestinit 00:12:46.754 20:13:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:46.754 20:13:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.754 20:13:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:46.754 20:13:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:46.754 20:13:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:46.754 20:13:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.754 20:13:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.754 20:13:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.754 20:13:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:46.754 20:13:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:46.754 20:13:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:46.754 20:13:23 -- common/autotest_common.sh@10 -- # set +x 00:12:53.323 20:13:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:53.323 20:13:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:53.323 20:13:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:53.323 20:13:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:53.323 20:13:29 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:53.323 20:13:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:53.323 20:13:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:53.323 20:13:29 -- nvmf/common.sh@294 -- # net_devs=() 00:12:53.323 20:13:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:53.323 20:13:29 -- nvmf/common.sh@295 -- # e810=() 00:12:53.323 20:13:29 -- nvmf/common.sh@295 -- # local -ga e810 00:12:53.323 20:13:29 -- nvmf/common.sh@296 -- # x722=() 00:12:53.323 20:13:29 -- nvmf/common.sh@296 -- # local -ga x722 00:12:53.323 20:13:29 -- nvmf/common.sh@297 -- # mlx=() 00:12:53.323 20:13:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:53.323 20:13:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.323 20:13:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:53.323 20:13:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:53.323 20:13:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:53.323 20:13:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:53.323 20:13:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:53.323 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:53.323 20:13:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:53.323 20:13:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:53.323 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:53.323 20:13:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:53.323 20:13:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:53.323 
20:13:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.323 20:13:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:53.323 20:13:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.323 20:13:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:53.323 Found net devices under 0000:af:00.0: cvl_0_0 00:12:53.323 20:13:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.323 20:13:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:53.323 20:13:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.323 20:13:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:53.323 20:13:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.323 20:13:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:53.323 Found net devices under 0000:af:00.1: cvl_0_1 00:12:53.323 20:13:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.323 20:13:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:53.323 20:13:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:53.323 20:13:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:53.323 20:13:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:53.323 20:13:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.323 20:13:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.323 20:13:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.323 20:13:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:53.323 20:13:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.323 20:13:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.323 20:13:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:53.323 20:13:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.323 20:13:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.323 20:13:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:53.323 20:13:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:53.323 20:13:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.323 20:13:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.323 20:13:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.323 20:13:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.323 20:13:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:53.323 20:13:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.323 20:13:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.323 20:13:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.323 20:13:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:53.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:12:53.324 00:12:53.324 --- 10.0.0.2 ping statistics --- 00:12:53.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.324 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:12:53.324 20:13:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:53.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:12:53.324 00:12:53.324 --- 10.0.0.1 ping statistics --- 00:12:53.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.324 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:12:53.324 20:13:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.324 20:13:30 -- nvmf/common.sh@410 -- # return 0 00:12:53.324 20:13:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:53.324 20:13:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.324 20:13:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:53.324 20:13:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:53.324 20:13:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.324 20:13:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:53.324 20:13:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:53.324 20:13:30 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:53.324 20:13:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:53.324 20:13:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:53.324 20:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:53.324 20:13:30 -- nvmf/common.sh@469 -- # nvmfpid=1699875 00:12:53.324 20:13:30 -- nvmf/common.sh@470 -- # waitforlisten 1699875 00:12:53.324 20:13:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.324 20:13:30 -- common/autotest_common.sh@817 -- # '[' -z 1699875 ']' 00:12:53.324 20:13:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.324 20:13:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:53.324 20:13:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.324 20:13:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:53.324 20:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:53.324 [2024-02-14 20:13:30.150110] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:53.324 [2024-02-14 20:13:30.150153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.324 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.324 [2024-02-14 20:13:30.211738] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.324 [2024-02-14 20:13:30.286773] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.324 [2024-02-14 20:13:30.286883] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.324 [2024-02-14 20:13:30.286890] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.324 [2024-02-14 20:13:30.286896] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
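With the target now listening inside the cvl_0_0_ns_spdk namespace, invalid.sh starts feeding deliberately malformed arguments to the JSON-RPC server and asserting on the error text that comes back. A minimal sketch of the pattern each case below follows, assuming rpc.py exits non-zero when the server returns a JSON-RPC error; the helper name check_rpc_error is not part of the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Hypothetical helper: the RPC call must fail, and its combined
    # stdout/stderr must contain the expected error message.
    check_rpc_error() {
        local expected=$1; shift
        local out
        if out=$("$rpc" "$@" 2>&1); then
            return 1                        # unexpected success
        fi
        [[ $out == *"$expected"* ]]         # pass only on the expected message
    }

    # Mirrors target/invalid.sh@40: create a subsystem on a target that
    # does not exist.
    check_rpc_error 'Unable to find target' \
        nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27223

The same shape repeats for the invalid serial-number, model-number, listener, and cntlid-range cases traced below; note that target/invalid.sh@16 pins RANDOM=0, so the "random" serial and model strings built later in the run are reproducible.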
00:12:53.324 [2024-02-14 20:13:30.287009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.324 [2024-02-14 20:13:30.287087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.324 [2024-02-14 20:13:30.287304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.324 [2024-02-14 20:13:30.287306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.583 20:13:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:53.583 20:13:30 -- common/autotest_common.sh@850 -- # return 0 00:12:53.583 20:13:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:53.583 20:13:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:53.583 20:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:53.583 20:13:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.583 20:13:30 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:53.583 20:13:30 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27223 00:12:53.842 [2024-02-14 20:13:31.129316] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:53.842 20:13:31 -- target/invalid.sh@40 -- # out='request: 00:12:53.842 { 00:12:53.842 "nqn": "nqn.2016-06.io.spdk:cnode27223", 00:12:53.842 "tgt_name": "foobar", 00:12:53.842 "method": "nvmf_create_subsystem", 00:12:53.842 "req_id": 1 00:12:53.842 } 00:12:53.842 Got JSON-RPC error response 00:12:53.842 response: 00:12:53.842 { 00:12:53.842 "code": -32603, 00:12:53.842 "message": "Unable to find target foobar" 00:12:53.842 }' 00:12:53.842 20:13:31 -- target/invalid.sh@41 -- # [[ request: 00:12:53.842 { 00:12:53.842 "nqn": "nqn.2016-06.io.spdk:cnode27223", 00:12:53.842 "tgt_name": "foobar", 00:12:53.842 "method": "nvmf_create_subsystem", 00:12:53.842 "req_id": 1 00:12:53.842 } 00:12:53.842 Got JSON-RPC error response 00:12:53.842 response: 00:12:53.842 { 00:12:53.842 "code": -32603, 00:12:53.842 "message": "Unable to find target foobar" 00:12:53.842 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:53.842 20:13:31 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:53.842 20:13:31 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11806 00:12:54.101 [2024-02-14 20:13:31.326037] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11806: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:54.101 20:13:31 -- target/invalid.sh@45 -- # out='request: 00:12:54.101 { 00:12:54.101 "nqn": "nqn.2016-06.io.spdk:cnode11806", 00:12:54.101 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.101 "method": "nvmf_create_subsystem", 00:12:54.101 "req_id": 1 00:12:54.101 } 00:12:54.101 Got JSON-RPC error response 00:12:54.101 response: 00:12:54.101 { 00:12:54.101 "code": -32602, 00:12:54.101 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.101 }' 00:12:54.101 20:13:31 -- target/invalid.sh@46 -- # [[ request: 00:12:54.101 { 00:12:54.101 "nqn": "nqn.2016-06.io.spdk:cnode11806", 00:12:54.101 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.101 "method": "nvmf_create_subsystem", 00:12:54.101 "req_id": 1 00:12:54.101 } 00:12:54.101 Got JSON-RPC error response 00:12:54.101 response: 00:12:54.101 { 
00:12:54.101 "code": -32602, 00:12:54.101 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.101 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.101 20:13:31 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:54.101 20:13:31 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32070 00:12:54.101 [2024-02-14 20:13:31.514650] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32070: invalid model number 'SPDK_Controller' 00:12:54.361 20:13:31 -- target/invalid.sh@50 -- # out='request: 00:12:54.361 { 00:12:54.361 "nqn": "nqn.2016-06.io.spdk:cnode32070", 00:12:54.361 "model_number": "SPDK_Controller\u001f", 00:12:54.361 "method": "nvmf_create_subsystem", 00:12:54.361 "req_id": 1 00:12:54.361 } 00:12:54.361 Got JSON-RPC error response 00:12:54.361 response: 00:12:54.361 { 00:12:54.361 "code": -32602, 00:12:54.361 "message": "Invalid MN SPDK_Controller\u001f" 00:12:54.361 }' 00:12:54.361 20:13:31 -- target/invalid.sh@51 -- # [[ request: 00:12:54.361 { 00:12:54.361 "nqn": "nqn.2016-06.io.spdk:cnode32070", 00:12:54.361 "model_number": "SPDK_Controller\u001f", 00:12:54.361 "method": "nvmf_create_subsystem", 00:12:54.361 "req_id": 1 00:12:54.361 } 00:12:54.361 Got JSON-RPC error response 00:12:54.361 response: 00:12:54.361 { 00:12:54.361 "code": -32602, 00:12:54.361 "message": "Invalid MN SPDK_Controller\u001f" 00:12:54.361 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:54.361 20:13:31 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:54.361 20:13:31 -- target/invalid.sh@19 -- # local length=21 ll 00:12:54.361 20:13:31 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:54.361 20:13:31 -- target/invalid.sh@21 -- # local chars 00:12:54.361 20:13:31 -- target/invalid.sh@22 -- # local string 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 38 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+='&' 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 101 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=e 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 90 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=Z 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 112 00:12:54.361 20:13:31 -- 
target/invalid.sh@25 -- # echo -e '\x70' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=p 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 36 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+='$' 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 98 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=b 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 121 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=y 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 34 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+='"' 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 43 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=+ 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 123 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+='{' 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 45 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=- 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 48 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=0 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 102 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=f 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 56 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # string+=8 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.361 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.361 20:13:31 -- target/invalid.sh@25 -- # printf %x 61 00:12:54.362 20:13:31 -- 
target/invalid.sh@25 -- # echo -e '\x3d' 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # string+== 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # printf %x 68 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # string+=D 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # printf %x 109 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # string+=m 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # printf %x 106 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # string+=j 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # printf %x 94 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # string+='^' 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # printf %x 99 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # string+=c 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # printf %x 76 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:54.362 20:13:31 -- target/invalid.sh@25 -- # string+=L 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.362 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.362 20:13:31 -- target/invalid.sh@28 -- # [[ & == \- ]] 00:12:54.362 20:13:31 -- target/invalid.sh@31 -- # echo '&eZp$by"+{-0f8=Dmj^cL' 00:12:54.362 20:13:31 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '&eZp$by"+{-0f8=Dmj^cL' nqn.2016-06.io.spdk:cnode24314 00:12:54.621 [2024-02-14 20:13:31.831692] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24314: invalid serial number '&eZp$by"+{-0f8=Dmj^cL' 00:12:54.621 20:13:31 -- target/invalid.sh@54 -- # out='request: 00:12:54.621 { 00:12:54.621 "nqn": "nqn.2016-06.io.spdk:cnode24314", 00:12:54.621 "serial_number": "&eZp$by\"+{-0f8=Dmj^cL", 00:12:54.621 "method": "nvmf_create_subsystem", 00:12:54.621 "req_id": 1 00:12:54.621 } 00:12:54.621 Got JSON-RPC error response 00:12:54.621 response: 00:12:54.621 { 00:12:54.621 "code": -32602, 00:12:54.621 "message": "Invalid SN &eZp$by\"+{-0f8=Dmj^cL" 00:12:54.621 }' 00:12:54.621 20:13:31 -- target/invalid.sh@55 -- # [[ request: 00:12:54.621 { 00:12:54.621 "nqn": "nqn.2016-06.io.spdk:cnode24314", 00:12:54.621 "serial_number": "&eZp$by\"+{-0f8=Dmj^cL", 00:12:54.621 "method": "nvmf_create_subsystem", 00:12:54.621 "req_id": 1 00:12:54.621 } 00:12:54.621 Got JSON-RPC error response 00:12:54.621 response: 00:12:54.621 { 00:12:54.621 "code": -32602, 00:12:54.621 
"message": "Invalid SN &eZp$by\"+{-0f8=Dmj^cL" 00:12:54.621 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.622 20:13:31 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:54.622 20:13:31 -- target/invalid.sh@19 -- # local length=41 ll 00:12:54.622 20:13:31 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:54.622 20:13:31 -- target/invalid.sh@21 -- # local chars 00:12:54.622 20:13:31 -- target/invalid.sh@22 -- # local string 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 80 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=P 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 67 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=C 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 127 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 87 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=W 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 36 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+='$' 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 95 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=_ 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 95 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=_ 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 56 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=8 00:12:54.622 20:13:31 -- target/invalid.sh@24 
-- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 72 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=H 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 66 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=B 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 98 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=b 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 66 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=B 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 73 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=I 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 113 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=q 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 45 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=- 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 56 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=8 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 49 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=1 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 107 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=k 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 56 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=8 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 75 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=K 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 66 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=B 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 52 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # string+=4 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:31 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:31 -- target/invalid.sh@25 -- # printf %x 113 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # string+=q 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # printf %x 123 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # string+='{' 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # printf %x 37 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # string+=% 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # printf %x 82 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # string+=R 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # printf %x 44 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # string+=, 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # printf %x 104 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # string+=h 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.622 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.622 20:13:32 -- target/invalid.sh@25 -- # printf %x 68 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+=D 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 69 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+=E 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 117 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+=u 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 62 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+='>' 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 61 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+== 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 126 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+='~' 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 85 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+=U 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 101 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+=e 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 96 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+='`' 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 65 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+=A 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 59 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+=';' 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 69 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+=E 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # printf %x 118 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:54.882 20:13:32 -- target/invalid.sh@25 -- # string+=v 00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:54.882 20:13:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.882 20:13:32 -- target/invalid.sh@28 -- # [[ P == \- ]] 00:12:54.882 20:13:32 -- target/invalid.sh@31 -- # echo 'PCW$__8HBbBIq-81k8KB4q{%R,hDEu>=~Ue`A;Ev' 00:12:54.882 20:13:32 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'PCW$__8HBbBIq-81k8KB4q{%R,hDEu>=~Ue`A;Ev' nqn.2016-06.io.spdk:cnode10164 00:12:54.882 [2024-02-14 20:13:32.261091] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10164: invalid model number 'PCW$__8HBbBIq-81k8KB4q{%R,hDEu>=~Ue`A;Ev' 00:12:54.882 20:13:32 -- target/invalid.sh@58 -- # out='request: 00:12:54.882 { 00:12:54.882 "nqn": "nqn.2016-06.io.spdk:cnode10164", 00:12:54.882 "model_number": "PC\u007fW$__8HBbBIq-81k8KB4q{%R,hDEu>=~Ue`A;Ev", 00:12:54.882 "method": "nvmf_create_subsystem", 00:12:54.882 "req_id": 1 00:12:54.882 } 00:12:54.882 Got JSON-RPC error response 00:12:54.882 response: 00:12:54.882 { 00:12:54.882 "code": -32602, 00:12:54.882 "message": "Invalid MN PC\u007fW$__8HBbBIq-81k8KB4q{%R,hDEu>=~Ue`A;Ev" 00:12:54.882 }' 00:12:54.882 20:13:32 -- target/invalid.sh@59 -- # [[ request: 00:12:54.882 { 00:12:54.882 "nqn": "nqn.2016-06.io.spdk:cnode10164", 00:12:54.882 "model_number": "PC\u007fW$__8HBbBIq-81k8KB4q{%R,hDEu>=~Ue`A;Ev", 00:12:54.882 "method": "nvmf_create_subsystem", 00:12:54.882 "req_id": 1 00:12:54.882 } 00:12:54.882 Got JSON-RPC error response 00:12:54.882 response: 00:12:54.882 { 00:12:54.882 "code": -32602, 00:12:54.882 "message": "Invalid MN PC\u007fW$__8HBbBIq-81k8KB4q{%R,hDEu>=~Ue`A;Ev" 00:12:54.882 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:54.882 20:13:32 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:55.141 [2024-02-14 20:13:32.437736] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.141 20:13:32 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:55.401 20:13:32 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:55.401 20:13:32 -- target/invalid.sh@67 -- # echo '' 00:12:55.401 20:13:32 -- target/invalid.sh@67 -- # head -n 1 00:12:55.401 20:13:32 -- target/invalid.sh@67 -- # IP= 00:12:55.401 20:13:32 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:55.660 [2024-02-14 20:13:32.828341] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:55.660 20:13:32 -- target/invalid.sh@69 -- # out='request: 00:12:55.660 { 00:12:55.660 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:55.660 "listen_address": { 00:12:55.660 "trtype": "tcp", 00:12:55.660 "traddr": "", 00:12:55.660 "trsvcid": "4421" 00:12:55.660 }, 00:12:55.660 "method": "nvmf_subsystem_remove_listener", 00:12:55.660 "req_id": 1 00:12:55.660 } 00:12:55.660 Got JSON-RPC error response 00:12:55.660 response: 00:12:55.660 { 00:12:55.660 "code": -32602, 00:12:55.660 "message": "Invalid parameters" 00:12:55.660 }' 00:12:55.660 20:13:32 -- target/invalid.sh@70 -- # [[ request: 00:12:55.660 { 00:12:55.660 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:55.660 "listen_address": { 00:12:55.660 "trtype": "tcp", 00:12:55.660 "traddr": "", 00:12:55.660 "trsvcid": "4421" 00:12:55.660 }, 00:12:55.660 "method": "nvmf_subsystem_remove_listener", 
00:12:55.660 "req_id": 1 00:12:55.660 } 00:12:55.660 Got JSON-RPC error response 00:12:55.660 response: 00:12:55.660 { 00:12:55.660 "code": -32602, 00:12:55.660 "message": "Invalid parameters" 00:12:55.660 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:55.660 20:13:32 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16140 -i 0 00:12:55.660 [2024-02-14 20:13:33.004909] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16140: invalid cntlid range [0-65519] 00:12:55.660 20:13:33 -- target/invalid.sh@73 -- # out='request: 00:12:55.660 { 00:12:55.660 "nqn": "nqn.2016-06.io.spdk:cnode16140", 00:12:55.660 "min_cntlid": 0, 00:12:55.660 "method": "nvmf_create_subsystem", 00:12:55.660 "req_id": 1 00:12:55.660 } 00:12:55.660 Got JSON-RPC error response 00:12:55.660 response: 00:12:55.660 { 00:12:55.660 "code": -32602, 00:12:55.660 "message": "Invalid cntlid range [0-65519]" 00:12:55.660 }' 00:12:55.660 20:13:33 -- target/invalid.sh@74 -- # [[ request: 00:12:55.660 { 00:12:55.660 "nqn": "nqn.2016-06.io.spdk:cnode16140", 00:12:55.660 "min_cntlid": 0, 00:12:55.660 "method": "nvmf_create_subsystem", 00:12:55.660 "req_id": 1 00:12:55.660 } 00:12:55.660 Got JSON-RPC error response 00:12:55.660 response: 00:12:55.660 { 00:12:55.660 "code": -32602, 00:12:55.660 "message": "Invalid cntlid range [0-65519]" 00:12:55.660 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.660 20:13:33 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27524 -i 65520 00:12:55.919 [2024-02-14 20:13:33.181528] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27524: invalid cntlid range [65520-65519] 00:12:55.919 20:13:33 -- target/invalid.sh@75 -- # out='request: 00:12:55.919 { 00:12:55.919 "nqn": "nqn.2016-06.io.spdk:cnode27524", 00:12:55.919 "min_cntlid": 65520, 00:12:55.919 "method": "nvmf_create_subsystem", 00:12:55.919 "req_id": 1 00:12:55.919 } 00:12:55.919 Got JSON-RPC error response 00:12:55.919 response: 00:12:55.919 { 00:12:55.919 "code": -32602, 00:12:55.919 "message": "Invalid cntlid range [65520-65519]" 00:12:55.919 }' 00:12:55.919 20:13:33 -- target/invalid.sh@76 -- # [[ request: 00:12:55.919 { 00:12:55.919 "nqn": "nqn.2016-06.io.spdk:cnode27524", 00:12:55.919 "min_cntlid": 65520, 00:12:55.919 "method": "nvmf_create_subsystem", 00:12:55.919 "req_id": 1 00:12:55.919 } 00:12:55.919 Got JSON-RPC error response 00:12:55.919 response: 00:12:55.919 { 00:12:55.919 "code": -32602, 00:12:55.919 "message": "Invalid cntlid range [65520-65519]" 00:12:55.919 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.919 20:13:33 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10216 -I 0 00:12:56.178 [2024-02-14 20:13:33.358186] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10216: invalid cntlid range [1-0] 00:12:56.178 20:13:33 -- target/invalid.sh@77 -- # out='request: 00:12:56.178 { 00:12:56.178 "nqn": "nqn.2016-06.io.spdk:cnode10216", 00:12:56.178 "max_cntlid": 0, 00:12:56.178 "method": "nvmf_create_subsystem", 00:12:56.178 "req_id": 1 00:12:56.178 } 00:12:56.178 Got JSON-RPC error response 00:12:56.178 response: 00:12:56.178 { 00:12:56.178 "code": -32602, 00:12:56.178 "message": "Invalid cntlid range [1-0]" 
00:12:56.178 }' 00:12:56.178 20:13:33 -- target/invalid.sh@78 -- # [[ request: 00:12:56.178 { 00:12:56.178 "nqn": "nqn.2016-06.io.spdk:cnode10216", 00:12:56.178 "max_cntlid": 0, 00:12:56.178 "method": "nvmf_create_subsystem", 00:12:56.178 "req_id": 1 00:12:56.178 } 00:12:56.178 Got JSON-RPC error response 00:12:56.178 response: 00:12:56.178 { 00:12:56.178 "code": -32602, 00:12:56.178 "message": "Invalid cntlid range [1-0]" 00:12:56.178 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.178 20:13:33 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18109 -I 65520 00:12:56.178 [2024-02-14 20:13:33.534746] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18109: invalid cntlid range [1-65520] 00:12:56.178 20:13:33 -- target/invalid.sh@79 -- # out='request: 00:12:56.178 { 00:12:56.178 "nqn": "nqn.2016-06.io.spdk:cnode18109", 00:12:56.178 "max_cntlid": 65520, 00:12:56.178 "method": "nvmf_create_subsystem", 00:12:56.178 "req_id": 1 00:12:56.178 } 00:12:56.178 Got JSON-RPC error response 00:12:56.178 response: 00:12:56.178 { 00:12:56.178 "code": -32602, 00:12:56.178 "message": "Invalid cntlid range [1-65520]" 00:12:56.178 }' 00:12:56.178 20:13:33 -- target/invalid.sh@80 -- # [[ request: 00:12:56.178 { 00:12:56.178 "nqn": "nqn.2016-06.io.spdk:cnode18109", 00:12:56.178 "max_cntlid": 65520, 00:12:56.178 "method": "nvmf_create_subsystem", 00:12:56.178 "req_id": 1 00:12:56.178 } 00:12:56.178 Got JSON-RPC error response 00:12:56.178 response: 00:12:56.178 { 00:12:56.178 "code": -32602, 00:12:56.178 "message": "Invalid cntlid range [1-65520]" 00:12:56.178 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.178 20:13:33 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15669 -i 6 -I 5 00:12:56.436 [2024-02-14 20:13:33.707360] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15669: invalid cntlid range [6-5] 00:12:56.436 20:13:33 -- target/invalid.sh@83 -- # out='request: 00:12:56.436 { 00:12:56.436 "nqn": "nqn.2016-06.io.spdk:cnode15669", 00:12:56.436 "min_cntlid": 6, 00:12:56.436 "max_cntlid": 5, 00:12:56.436 "method": "nvmf_create_subsystem", 00:12:56.436 "req_id": 1 00:12:56.436 } 00:12:56.436 Got JSON-RPC error response 00:12:56.436 response: 00:12:56.436 { 00:12:56.436 "code": -32602, 00:12:56.436 "message": "Invalid cntlid range [6-5]" 00:12:56.436 }' 00:12:56.436 20:13:33 -- target/invalid.sh@84 -- # [[ request: 00:12:56.436 { 00:12:56.436 "nqn": "nqn.2016-06.io.spdk:cnode15669", 00:12:56.436 "min_cntlid": 6, 00:12:56.436 "max_cntlid": 5, 00:12:56.436 "method": "nvmf_create_subsystem", 00:12:56.436 "req_id": 1 00:12:56.436 } 00:12:56.436 Got JSON-RPC error response 00:12:56.436 response: 00:12:56.436 { 00:12:56.436 "code": -32602, 00:12:56.436 "message": "Invalid cntlid range [6-5]" 00:12:56.436 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.437 20:13:33 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:56.437 20:13:33 -- target/invalid.sh@87 -- # out='request: 00:12:56.437 { 00:12:56.437 "name": "foobar", 00:12:56.437 "method": "nvmf_delete_target", 00:12:56.437 "req_id": 1 00:12:56.437 } 00:12:56.437 Got JSON-RPC error response 00:12:56.437 response: 00:12:56.437 { 00:12:56.437 "code": -32602, 00:12:56.437 
"message": "The specified target doesn'\''t exist, cannot delete it." 00:12:56.437 }' 00:12:56.437 20:13:33 -- target/invalid.sh@88 -- # [[ request: 00:12:56.437 { 00:12:56.437 "name": "foobar", 00:12:56.437 "method": "nvmf_delete_target", 00:12:56.437 "req_id": 1 00:12:56.437 } 00:12:56.437 Got JSON-RPC error response 00:12:56.437 response: 00:12:56.437 { 00:12:56.437 "code": -32602, 00:12:56.437 "message": "The specified target doesn't exist, cannot delete it." 00:12:56.437 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:56.437 20:13:33 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:56.437 20:13:33 -- target/invalid.sh@91 -- # nvmftestfini 00:12:56.437 20:13:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:56.437 20:13:33 -- nvmf/common.sh@116 -- # sync 00:12:56.437 20:13:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:56.437 20:13:33 -- nvmf/common.sh@119 -- # set +e 00:12:56.437 20:13:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:56.437 20:13:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:56.437 rmmod nvme_tcp 00:12:56.696 rmmod nvme_fabrics 00:12:56.696 rmmod nvme_keyring 00:12:56.696 20:13:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:56.696 20:13:33 -- nvmf/common.sh@123 -- # set -e 00:12:56.696 20:13:33 -- nvmf/common.sh@124 -- # return 0 00:12:56.696 20:13:33 -- nvmf/common.sh@477 -- # '[' -n 1699875 ']' 00:12:56.696 20:13:33 -- nvmf/common.sh@478 -- # killprocess 1699875 00:12:56.696 20:13:33 -- common/autotest_common.sh@924 -- # '[' -z 1699875 ']' 00:12:56.696 20:13:33 -- common/autotest_common.sh@928 -- # kill -0 1699875 00:12:56.696 20:13:33 -- common/autotest_common.sh@929 -- # uname 00:12:56.696 20:13:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:56.696 20:13:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1699875 00:12:56.696 20:13:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:56.696 20:13:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:56.696 20:13:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1699875' 00:12:56.696 killing process with pid 1699875 00:12:56.696 20:13:33 -- common/autotest_common.sh@943 -- # kill 1699875 00:12:56.696 20:13:33 -- common/autotest_common.sh@948 -- # wait 1699875 00:12:56.956 20:13:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:56.956 20:13:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:56.956 20:13:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:56.956 20:13:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.956 20:13:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:56.956 20:13:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.956 20:13:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.956 20:13:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.860 20:13:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:58.860 00:12:58.860 real 0m12.533s 00:12:58.860 user 0m19.274s 00:12:58.860 sys 0m5.674s 00:12:58.860 20:13:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:58.860 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:12:58.860 ************************************ 00:12:58.860 END TEST nvmf_invalid 00:12:58.860 ************************************ 00:12:58.860 20:13:36 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:58.860 20:13:36 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:58.860 20:13:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:58.860 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:12:58.860 ************************************ 00:12:58.860 START TEST nvmf_abort 00:12:58.860 ************************************ 00:12:58.860 20:13:36 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:59.119 * Looking for test storage... 00:12:59.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.119 20:13:36 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.119 20:13:36 -- nvmf/common.sh@7 -- # uname -s 00:12:59.119 20:13:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.119 20:13:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.119 20:13:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.119 20:13:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.119 20:13:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.119 20:13:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.119 20:13:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.119 20:13:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.119 20:13:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.119 20:13:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.119 20:13:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:59.119 20:13:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:59.119 20:13:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.119 20:13:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.119 20:13:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.119 20:13:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.119 20:13:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.119 20:13:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.120 20:13:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.120 20:13:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.120 20:13:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.120 20:13:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.120 20:13:36 -- paths/export.sh@5 -- # export PATH 00:12:59.120 20:13:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.120 20:13:36 -- nvmf/common.sh@46 -- # : 0 00:12:59.120 20:13:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:59.120 20:13:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:59.120 20:13:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:59.120 20:13:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.120 20:13:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.120 20:13:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:59.120 20:13:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:59.120 20:13:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:59.120 20:13:36 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.120 20:13:36 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:59.120 20:13:36 -- target/abort.sh@14 -- # nvmftestinit 00:12:59.120 20:13:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:59.120 20:13:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.120 20:13:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:59.120 20:13:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:59.120 20:13:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:59.120 20:13:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.120 20:13:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.120 20:13:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.120 20:13:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:59.120 20:13:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:59.120 20:13:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:59.120 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.722 20:13:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
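Worth noting as the trace sets up: nvmf/common.sh above derives NVME_HOSTNQN and NVME_HOSTID with 'nvme gen-hostnqn' and packs them into the NVME_HOST array for later 'nvme connect' use. A minimal sketch of how those host-identity options are typically passed on the initiator side (illustration only, not a command from this trace; the target address and subsystem NQN are the ones the abort test below talks to):

    # Sketch only: host identity as common.sh assembled it in this run.
    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
    NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
    # Connect over TCP to the subsystem the abort test creates further down.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"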
00:13:05.722 20:13:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:05.722 20:13:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:05.722 20:13:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:05.722 20:13:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:05.722 20:13:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:05.722 20:13:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:05.722 20:13:42 -- nvmf/common.sh@294 -- # net_devs=() 00:13:05.722 20:13:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:05.722 20:13:42 -- nvmf/common.sh@295 -- # e810=() 00:13:05.722 20:13:42 -- nvmf/common.sh@295 -- # local -ga e810 00:13:05.722 20:13:42 -- nvmf/common.sh@296 -- # x722=() 00:13:05.722 20:13:42 -- nvmf/common.sh@296 -- # local -ga x722 00:13:05.722 20:13:42 -- nvmf/common.sh@297 -- # mlx=() 00:13:05.723 20:13:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:05.723 20:13:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.723 20:13:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:05.723 20:13:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:05.723 20:13:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:05.723 20:13:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:05.723 20:13:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:05.723 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:05.723 20:13:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:05.723 20:13:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:05.723 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:05.723 20:13:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
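The pci_bus_cache lookups above boil down to matching Intel E810 device IDs (0x1592 and 0x159b) and then, just below, reading each matched port's net/ directory in sysfs to learn its kernel interface name. An equivalent standalone sketch, assuming lspci is available (an illustration, not the common.sh implementation):

    # Illustration: find E810 ports by vendor:device ID and report their
    # kernel interfaces, as the cache lookups in this trace effectively do.
    for id in 1592 159b; do
        for pci in $(lspci -D -d "8086:${id}" | awk '{print $1}'); do
            # net/ under the PCI device names the interface (cvl_0_0 here).
            echo "Found ${pci} (0x8086 - 0x${id}): $(ls "/sys/bus/pci/devices/${pci}/net/" 2>/dev/null)"
        done
    done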
00:13:05.723 20:13:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:05.723 20:13:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.723 20:13:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:05.723 20:13:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.723 20:13:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:05.723 Found net devices under 0000:af:00.0: cvl_0_0 00:13:05.723 20:13:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.723 20:13:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:05.723 20:13:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.723 20:13:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:05.723 20:13:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.723 20:13:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:05.723 Found net devices under 0000:af:00.1: cvl_0_1 00:13:05.723 20:13:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.723 20:13:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:05.723 20:13:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:05.723 20:13:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:05.723 20:13:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.723 20:13:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.723 20:13:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.723 20:13:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:05.723 20:13:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.723 20:13:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:05.723 20:13:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:05.723 20:13:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.723 20:13:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.723 20:13:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:05.723 20:13:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:05.723 20:13:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.723 20:13:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.723 20:13:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.723 20:13:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.723 20:13:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:05.723 20:13:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.723 20:13:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.723 20:13:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.723 20:13:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:05.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:05.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:13:05.723 00:13:05.723 --- 10.0.0.2 ping statistics --- 00:13:05.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.723 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:05.723 20:13:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:05.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:13:05.723 00:13:05.723 --- 10.0.0.1 ping statistics --- 00:13:05.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.723 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:13:05.723 20:13:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.723 20:13:42 -- nvmf/common.sh@410 -- # return 0 00:13:05.723 20:13:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:05.723 20:13:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.723 20:13:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:05.723 20:13:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.723 20:13:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:05.723 20:13:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:05.723 20:13:42 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:05.723 20:13:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:05.723 20:13:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:05.723 20:13:42 -- common/autotest_common.sh@10 -- # set +x 00:13:05.723 20:13:42 -- nvmf/common.sh@469 -- # nvmfpid=1704535 00:13:05.723 20:13:42 -- nvmf/common.sh@470 -- # waitforlisten 1704535 00:13:05.723 20:13:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:05.723 20:13:42 -- common/autotest_common.sh@817 -- # '[' -z 1704535 ']' 00:13:05.723 20:13:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.723 20:13:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:05.723 20:13:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.723 20:13:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:05.723 20:13:42 -- common/autotest_common.sh@10 -- # set +x 00:13:05.723 [2024-02-14 20:13:42.643740] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:05.723 [2024-02-14 20:13:42.643781] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.723 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.723 [2024-02-14 20:13:42.705678] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.723 [2024-02-14 20:13:42.777443] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:05.723 [2024-02-14 20:13:42.777553] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.723 [2024-02-14 20:13:42.777561] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:05.723 [2024-02-14 20:13:42.777567] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.723 [2024-02-14 20:13:42.777692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.723 [2024-02-14 20:13:42.777861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.723 [2024-02-14 20:13:42.777863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.291 20:13:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:06.291 20:13:43 -- common/autotest_common.sh@850 -- # return 0 00:13:06.291 20:13:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:06.291 20:13:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:06.291 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.291 20:13:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.291 20:13:43 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:06.291 20:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.291 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.291 [2024-02-14 20:13:43.473149] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.291 20:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.291 20:13:43 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:06.291 20:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.291 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.291 Malloc0 00:13:06.291 20:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.291 20:13:43 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:06.291 20:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.291 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.291 Delay0 00:13:06.291 20:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.291 20:13:43 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:06.291 20:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.291 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.291 20:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.291 20:13:43 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:06.291 20:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.291 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.291 20:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.291 20:13:43 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:06.291 20:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.291 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.291 [2024-02-14 20:13:43.534531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.291 20:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.291 20:13:43 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.291 20:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.291 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.291 20:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:13:06.291 20:13:43 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:06.291 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.291 [2024-02-14 20:13:43.681955] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:08.821 Initializing NVMe Controllers 00:13:08.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:08.821 controller IO queue size 128 less than required 00:13:08.821 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:08.821 Initialization complete. Launching workers. 00:13:08.821 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 42989 00:13:08.821 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43051, failed to submit 62 00:13:08.821 success 42989, unsuccess 62, failed 0 00:13:08.821 20:13:45 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:08.821 20:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:08.821 20:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:08.821 20:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:08.821 20:13:45 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:08.821 20:13:45 -- target/abort.sh@38 -- # nvmftestfini 00:13:08.821 20:13:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:08.821 20:13:45 -- nvmf/common.sh@116 -- # sync 00:13:08.821 20:13:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:08.821 20:13:45 -- nvmf/common.sh@119 -- # set +e 00:13:08.821 20:13:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:08.821 20:13:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:08.821 rmmod nvme_tcp 00:13:08.821 rmmod nvme_fabrics 00:13:08.821 rmmod nvme_keyring 00:13:08.821 20:13:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:08.821 20:13:45 -- nvmf/common.sh@123 -- # set -e 00:13:08.821 20:13:45 -- nvmf/common.sh@124 -- # return 0 00:13:08.821 20:13:45 -- nvmf/common.sh@477 -- # '[' -n 1704535 ']' 00:13:08.821 20:13:45 -- nvmf/common.sh@478 -- # killprocess 1704535 00:13:08.821 20:13:45 -- common/autotest_common.sh@924 -- # '[' -z 1704535 ']' 00:13:08.821 20:13:45 -- common/autotest_common.sh@928 -- # kill -0 1704535 00:13:08.821 20:13:45 -- common/autotest_common.sh@929 -- # uname 00:13:08.821 20:13:45 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:08.821 20:13:45 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1704535 00:13:08.821 20:13:45 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:13:08.821 20:13:45 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:13:08.821 20:13:45 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1704535' 00:13:08.821 killing process with pid 1704535 00:13:08.821 20:13:45 -- common/autotest_common.sh@943 -- # kill 1704535 00:13:08.821 20:13:45 -- common/autotest_common.sh@948 -- # wait 1704535 00:13:08.821 20:13:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:08.821 20:13:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:08.821 20:13:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:08.821 20:13:46 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.821 20:13:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:08.821 20:13:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.821 20:13:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.821 20:13:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.724 20:13:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:10.724 00:13:10.724 real 0m11.878s 00:13:10.724 user 0m13.131s 00:13:10.724 sys 0m5.712s 00:13:10.724 20:13:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.724 20:13:48 -- common/autotest_common.sh@10 -- # set +x 00:13:10.724 ************************************ 00:13:10.724 END TEST nvmf_abort 00:13:10.724 ************************************ 00:13:10.982 20:13:48 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:10.982 20:13:48 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:10.982 20:13:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:10.982 20:13:48 -- common/autotest_common.sh@10 -- # set +x 00:13:10.982 ************************************ 00:13:10.982 START TEST nvmf_ns_hotplug_stress 00:13:10.982 ************************************ 00:13:10.982 20:13:48 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:10.982 * Looking for test storage... 00:13:10.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.982 20:13:48 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.982 20:13:48 -- nvmf/common.sh@7 -- # uname -s 00:13:10.982 20:13:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.982 20:13:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.982 20:13:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.983 20:13:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.983 20:13:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.983 20:13:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.983 20:13:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.983 20:13:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.983 20:13:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.983 20:13:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.983 20:13:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:10.983 20:13:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:10.983 20:13:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.983 20:13:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.983 20:13:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.983 20:13:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.983 20:13:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.983 20:13:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.983 20:13:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.983 20:13:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.983 20:13:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.983 20:13:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.983 20:13:48 -- paths/export.sh@5 -- # export PATH 00:13:10.983 20:13:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.983 20:13:48 -- nvmf/common.sh@46 -- # : 0 00:13:10.983 20:13:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:10.983 20:13:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:10.983 20:13:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:10.983 20:13:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.983 20:13:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.983 20:13:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:10.983 20:13:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:10.983 20:13:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:10.983 20:13:48 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.983 20:13:48 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:13:10.983 20:13:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:10.983 20:13:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.983 20:13:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:10.983 20:13:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:10.983 20:13:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:10.983 20:13:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:10.983 20:13:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.983 20:13:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.983 20:13:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:10.983 20:13:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:10.983 20:13:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:10.983 20:13:48 -- common/autotest_common.sh@10 -- # set +x 00:13:17.547 20:13:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:17.547 20:13:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:17.547 20:13:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:17.547 20:13:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:17.547 20:13:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:17.547 20:13:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:17.547 20:13:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:17.547 20:13:53 -- nvmf/common.sh@294 -- # net_devs=() 00:13:17.547 20:13:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:17.547 20:13:53 -- nvmf/common.sh@295 -- # e810=() 00:13:17.547 20:13:53 -- nvmf/common.sh@295 -- # local -ga e810 00:13:17.547 20:13:53 -- nvmf/common.sh@296 -- # x722=() 00:13:17.547 20:13:53 -- nvmf/common.sh@296 -- # local -ga x722 00:13:17.547 20:13:53 -- nvmf/common.sh@297 -- # mlx=() 00:13:17.547 20:13:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:17.547 20:13:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.547 20:13:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:17.547 20:13:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:17.547 20:13:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:17.547 20:13:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:17.547 20:13:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:17.547 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:17.547 20:13:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:17.547 20:13:53 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:17.547 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:17.547 20:13:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:17.547 20:13:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:17.547 20:13:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.547 20:13:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:17.547 20:13:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.547 20:13:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:17.547 Found net devices under 0000:af:00.0: cvl_0_0 00:13:17.547 20:13:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.547 20:13:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:17.547 20:13:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.547 20:13:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:17.547 20:13:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.547 20:13:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:17.547 Found net devices under 0000:af:00.1: cvl_0_1 00:13:17.547 20:13:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.547 20:13:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:17.547 20:13:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:17.547 20:13:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:17.547 20:13:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:17.547 20:13:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.547 20:13:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.547 20:13:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.547 20:13:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:17.547 20:13:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.547 20:13:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.547 20:13:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:17.547 20:13:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.547 20:13:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.547 20:13:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:17.547 20:13:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:17.547 20:13:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.547 20:13:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.547 20:13:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.547 20:13:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.547 20:13:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:17.547 20:13:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:13:17.547 20:13:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.547 20:13:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.547 20:13:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:17.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:13:17.547 00:13:17.547 --- 10.0.0.2 ping statistics --- 00:13:17.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.547 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:17.547 20:13:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:13:17.547 00:13:17.547 --- 10.0.0.1 ping statistics --- 00:13:17.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.547 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:13:17.547 20:13:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.547 20:13:54 -- nvmf/common.sh@410 -- # return 0 00:13:17.547 20:13:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:17.547 20:13:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.547 20:13:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:17.547 20:13:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:17.547 20:13:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.547 20:13:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:17.547 20:13:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:17.547 20:13:54 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:13:17.547 20:13:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:17.547 20:13:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:17.547 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:13:17.547 20:13:54 -- nvmf/common.sh@469 -- # nvmfpid=1708821 00:13:17.547 20:13:54 -- nvmf/common.sh@470 -- # waitforlisten 1708821 00:13:17.547 20:13:54 -- common/autotest_common.sh@817 -- # '[' -z 1708821 ']' 00:13:17.547 20:13:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.547 20:13:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:17.547 20:13:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.547 20:13:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:17.547 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:13:17.547 20:13:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:17.547 [2024-02-14 20:13:54.206727] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
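Condensed, the nvmf_tcp_init and nvmfappstart steps traced here move one E810 port into a private network namespace, address both ends of the link, open TCP port 4420, and start nvmf_tgt inside the namespace. A sketch of that sequence using this run's interface names and core mask (the polling loop is a simplified stand-in for the waitforlisten helper):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # Simplified stand-in for waitforlisten: poll until the RPC socket answers.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done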
00:13:17.547 [2024-02-14 20:13:54.206771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.547 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.547 [2024-02-14 20:13:54.269915] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.547 [2024-02-14 20:13:54.344990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:17.548 [2024-02-14 20:13:54.345095] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.548 [2024-02-14 20:13:54.345103] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.548 [2024-02-14 20:13:54.345109] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.548 [2024-02-14 20:13:54.345147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.548 [2024-02-14 20:13:54.345252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.548 [2024-02-14 20:13:54.345262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.806 20:13:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.806 20:13:55 -- common/autotest_common.sh@850 -- # return 0 00:13:17.806 20:13:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:17.806 20:13:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:17.806 20:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:17.806 20:13:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.806 20:13:55 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:13:17.806 20:13:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:17.806 [2024-02-14 20:13:55.185487] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.806 20:13:55 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:18.064 20:13:55 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.323 [2024-02-14 20:13:55.542780] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.323 20:13:55 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.323 20:13:55 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:18.582 Malloc0 00:13:18.582 20:13:55 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:18.841 Delay0 00:13:18.841 20:13:56 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.099 20:13:56 -- target/ns_hotplug_stress.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:19.099 NULL1 00:13:19.099 20:13:56 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:19.357 20:13:56 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:19.357 20:13:56 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1709311 00:13:19.357 20:13:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:19.357 20:13:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.357 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.736 Read completed with error (sct=0, sc=11) 00:13:20.736 20:13:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.736 20:13:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:13:20.736 20:13:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:20.736 true 00:13:20.736 20:13:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:20.736 20:13:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.673 20:13:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.932 20:13:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:13:21.932 20:13:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:21.932 true 00:13:21.932 20:13:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:21.932 20:13:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.191 20:13:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.450 20:13:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:13:22.450 20:13:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:22.450 true 00:13:22.450 20:13:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:22.450 20:13:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
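From here the trace repeats a single hotplug iteration: check that the spdk_nvme_perf initiator (PID 1709311, started above with -t 30 -q 128 -w randread) is still alive, remove namespace 1, re-add Delay0, and grow the NULL1 bdev by one block. Roughly, the loop the script drives looks like this (a sketch; the 1001..1012 range matches the iterations visible in this excerpt):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for null_size in $(seq 1001 1012); do
        kill -0 "$PERF_PID"                               # abort if perf died
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        $rpc bdev_null_resize NULL1 "$null_size"          # hot-resize under I/O
    done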
00:13:23.826 20:14:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.826 20:14:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:13:23.826 20:14:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:24.085 true 00:13:24.085 20:14:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:24.085 20:14:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.023 20:14:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.023 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.023 20:14:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:13:25.023 20:14:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:25.281 true 00:13:25.281 20:14:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:25.281 20:14:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.541 20:14:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.541 20:14:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:13:25.541 20:14:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:25.800 true 00:13:25.800 20:14:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:25.800 20:14:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.060 20:14:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.346 20:14:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:13:26.346 20:14:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:26.346 true 00:13:26.346 20:14:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:26.346 20:14:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.605 20:14:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.605 20:14:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:13:26.605 20:14:04 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:26.864 true 00:13:26.864 20:14:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:26.864 20:14:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.122 20:14:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.381 20:14:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:13:27.382 20:14:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:27.382 true 00:13:27.382 20:14:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:27.382 20:14:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.641 20:14:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.900 20:14:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:13:27.900 20:14:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:27.900 true 00:13:27.900 20:14:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:27.900 20:14:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.159 20:14:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.418 20:14:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:13:28.418 20:14:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:28.418 true 00:13:28.418 20:14:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:28.418 20:14:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.677 20:14:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.935 20:14:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:13:28.935 20:14:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:28.935 true 00:13:28.935 20:14:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311 00:13:28.935 20:14:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.195 20:14:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.195 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:13:29.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.487 [2024-02-14 20:14:06.653412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [the same ctrlr_bdev.c entry repeats some seventy more times with successive timestamps, 20:14:06.653496 through 20:14:06.656784, identical apart from the timestamp; the capture ends mid-entry]
Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.656824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.656868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.656905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.656935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.656962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.656999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657804] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.657994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.488 [2024-02-14 20:14:06.658922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.658959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 
[2024-02-14 20:14:06.658998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.659993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.660987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661491] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.661986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 
[2024-02-14 20:14:06.662587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.662958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.489 [2024-02-14 20:14:06.663409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.663977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.664801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665062] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.665972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 
[2024-02-14 20:14:06.666106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.666958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.490 [2024-02-14 20:14:06.667972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668494] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.668982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 
[2024-02-14 20:14:06.669613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.669987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.670988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671758] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.671987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.672038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.491 [2024-02-14 20:14:06.672084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 
[2024-02-14 20:14:06.672824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.672915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:29.492 [2024-02-14 20:14:06.673282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.673976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.674015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.674055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.674097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.674127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.674159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 20:14:06.674199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.492 [2024-02-14 
20:14:06.674238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:29.493 20:14:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013
00:13:29.494 20:14:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:13:29.498 [2024-02-14 20:14:06.698746] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.698798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.698844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.698890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.698938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.698981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.699921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 
[2024-02-14 20:14:06.699968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.700767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.701978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702389] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.498 [2024-02-14 20:14:06.702956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.702999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 
[2024-02-14 20:14:06.703843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.703992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.704975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.705979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706095] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.706829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 
[2024-02-14 20:14:06.707459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.499 [2024-02-14 20:14:06.707730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.707769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.707801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.707841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.707881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.707916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.707945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.707978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.708973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709924] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.709972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.710941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 
[2024-02-14 20:14:06.710987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.711999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.712036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.712068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.712097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.712134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.500 [2024-02-14 20:14:06.712176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.712814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713420] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.713958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 [2024-02-14 20:14:06.714527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.501 
[2024-02-14 20:14:06.714573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:29.501 [... identical *ERROR* line repeated verbatim, only the microsecond timestamp advancing (20:14:06.714573 through 20:14:06.719601) ...]
00:13:29.502 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:29.502 [... identical *ERROR* line repeated verbatim (20:14:06.719639 through 20:14:06.740274) ...]
size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.740974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741480] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.741995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 
[2024-02-14 20:14:06.742548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.742994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.743036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.743077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.743323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.743365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.743402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.507 [2024-02-14 20:14:06.743434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.743977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.744976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745050] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.745745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 
[2024-02-14 20:14:06.746421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.746958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.508 [2024-02-14 20:14:06.747966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748612] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.748996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.749965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 
[2024-02-14 20:14:06.750015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.750990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.751782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752480] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.509 [2024-02-14 20:14:06.752659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.752704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.752758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.752806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.752850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.752897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.752948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.752991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 
[2024-02-14 20:14:06.753627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.753990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.754973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.755966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.510 [2024-02-14 20:14:06.756012] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd error repeated verbatim; entries 2024-02-14 20:14:06.756061 through 20:14:06.767630 (console 00:13:29.510-00:13:29.513) elided]
00:13:29.513 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd error repeated verbatim; entries 2024-02-14 20:14:06.767676 through 20:14:06.781197 (console 00:13:29.513-00:13:29.516) elided]
size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.781758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782671] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.782987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.516 [2024-02-14 20:14:06.783562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.783611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.783660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.783706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.783760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.783805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.783847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 
[2024-02-14 20:14:06.783894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.783936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.783979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.784960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.785955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786386] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.786987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 
[2024-02-14 20:14:06.787476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.787912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.788274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.788315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.788358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.788389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.517 [2024-02-14 20:14:06.788420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.788995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.789982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790043] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.790951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 
[2024-02-14 20:14:06.791230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.791951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.792997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.793034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.518 [2024-02-14 20:14:06.793076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793802] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.793988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.794993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 
[2024-02-14 20:14:06.795369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.795981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.796988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.797034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.797086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.797133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.519 [2024-02-14 20:14:06.797180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520 [2024-02-14 20:14:06.797622] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.520
[... same ctrlr_bdev.c:298 read error repeated for every read submitted between 20:14:06.797885 and 20:14:06.823635 ...] 00:13:29.520-00:13:29.525
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:29.523
[2024-02-14 20:14:06.823672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.823711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.823752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.823790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.823821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.823858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.823896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.823934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.823965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.824853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.525 [2024-02-14 20:14:06.825817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.825863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.825915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.825957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826194] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.826960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 
[2024-02-14 20:14:06.827351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.827964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.828994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829854] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.829948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.526 [2024-02-14 20:14:06.830842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.830887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.830931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.830982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 
[2024-02-14 20:14:06.831065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.831987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.832965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833686] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.833977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.834994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 
[2024-02-14 20:14:06.835083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.835960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.836012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.527 [2024-02-14 20:14:06.836061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.836991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837561] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.837994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 [2024-02-14 20:14:06.838615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.528 true 
00:13:29.528 [2024-02-14 20:14:06.838666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated over a hundred more times, app timestamps 20:14:06.838714 through 20:14:06.844922, Jenkins timestamps 00:13:29.528-00:13:29.529 ...]
00:13:29.529 [2024-02-14 20:14:06.844970] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.529 [2024-02-14 20:14:06.845943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.845987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 
[2024-02-14 20:14:06.846153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.846974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.847978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848737] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.848962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.849724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 
[2024-02-14 20:14:06.850107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.850970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.530 [2024-02-14 20:14:06.851441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.851975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852220] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.852970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 
[2024-02-14 20:14:06.853664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.853955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.531 [2024-02-14 20:14:06.854791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.854834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.854873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.854916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.854945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.854974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.855786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856184] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.856996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 
[2024-02-14 20:14:06.857274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.857969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.858005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.858045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.858075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.858120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.858164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.858210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.858256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.532 [2024-02-14 20:14:06.858305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.858966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859784] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.859970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 [2024-02-14 20:14:06.860925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.533 
[2024-02-14 20:14:06.860964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:29.533 20:14:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311
00:13:29.533 20:14:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:29.534 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
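For context on the flood above: nvmf_bdev_ctrlr_read_cmd rejects a READ whose requested transfer length (NLB * block size) exceeds what the host's SGL describes, so each I/O completes with sct=0, sc=15 (Data SGL Length Invalid) while ns_hotplug_stress removes the namespace underneath it. The standalone C sketch below illustrates the shape of that bounds check; the struct and function names are hypothetical stand-ins, not the actual SPDK source at ctrlr_bdev.c:298.

    /* sketch.c - illustrative reconstruction of the length check behind the
     * "Read NLB 1 * block size 512 > SGL length 1" error; names are hypothetical. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct fake_read_req {
        uint64_t nlb;        /* number of logical blocks to read */
        uint32_t block_size; /* namespace block size in bytes */
        uint32_t sgl_length; /* payload length the host's SGL actually describes */
    };

    /* Returns an NVMe status code: 0x00 on success, 0x0f (Data SGL Length
     * Invalid) when the requested transfer cannot fit in the supplied SGL. */
    static uint8_t check_read_length(const struct fake_read_req *req)
    {
        if (req->nlb * (uint64_t)req->block_size > req->sgl_length) {
            fprintf(stderr,
                "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
                req->nlb, req->block_size, req->sgl_length);
            return 0x0f; /* surfaces in the log as sc=15 with sct=0 (generic) */
        }
        return 0x00;
    }

    int main(void)
    {
        /* The exact values from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
        struct fake_read_req req = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
        return check_read_length(&req) == 0 ? 0 : 1;
    }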
00:13:29.810 [2024-02-14 20:14:06.886439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.886967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887661] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.887987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 
[2024-02-14 20:14:06.888846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.888985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.889993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.810 [2024-02-14 20:14:06.890644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.890698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.890742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.890786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.890840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.890886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.890932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.890978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891225] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.891961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 
[2024-02-14 20:14:06.892559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.892981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.893996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.811 [2024-02-14 20:14:06.894372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894752] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.894987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.895965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 
[2024-02-14 20:14:06.896204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.896957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.897917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898508] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.898960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.812 [2024-02-14 20:14:06.899005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 
[2024-02-14 20:14:06.899596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.899957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.900927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.901984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.902031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.902078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.902130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.813 [2024-02-14 20:14:06.902174] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:29.815 Message suppressed 999 times: [2024-02-14 20:14:06.910849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:29.815 Read completed with error (sct=0, sc=15)
[... the same ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd read-length error repeats continuously from 20:14:06.902223 through 20:14:06.927472; several hundred duplicate log lines collapsed ...]
[2024-02-14 20:14:06.927517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.927970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.928993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.929987] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.819 [2024-02-14 20:14:06.930524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.930991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 
[2024-02-14 20:14:06.931153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.931990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.932999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933601] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.933979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.820 [2024-02-14 20:14:06.934713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.934760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.934805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.934852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.934900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.934946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.934993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 
[2024-02-14 20:14:06.935041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.935965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.936981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937204] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.937954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 
[2024-02-14 20:14:06.938556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.938992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.939045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.939091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.821 [2024-02-14 20:14:06.939133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.939981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.940988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941077] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.941997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 
[2024-02-14 20:14:06.942145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.942973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.943010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.943116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.943160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.943206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.943476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.822 [2024-02-14 20:14:06.943526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.822
[... the same "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeats continuously from 20:14:06.943573 through 20:14:06.958850 ...]
00:13:29.826 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same error message continues repeating from 20:14:06.958891 through 20:14:06.968763 ...]
00:13:29.828 [2024-02-14 20:14:06.968808] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.968837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.968864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.968901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.968938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.968976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 
[2024-02-14 20:14:06.969912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.969959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.970012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.970053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.970093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.970473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.970515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.970544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.970580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.970620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.828 [2024-02-14 20:14:06.970657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.970701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.970750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.970801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.970845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.970893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.970938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.970981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.971962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972472] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.972991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.973934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 
[2024-02-14 20:14:06.973971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.829 [2024-02-14 20:14:06.974779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.974823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.974866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.974908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.974942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.974975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.975979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976254] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.976953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 
[2024-02-14 20:14:06.977678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.977951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.978955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.979004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.979053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.979100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.979144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.979187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.979221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.979250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.979286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.830 [2024-02-14 20:14:06.979324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.979368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.979409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.979450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.979494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.979595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.979636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.979920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.979976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980261] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.980988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 
[2024-02-14 20:14:06.981359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.981981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.982969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983876] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.983953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.984001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.984046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.984099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.831 [2024-02-14 20:14:06.984144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.984992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 [2024-02-14 20:14:06.985036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.832 
[2024-02-14 20:14:06.985075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:29.832 [the same *ERROR* line repeated several hundred times, device timestamps 20:14:06.985116 through 20:14:07.009209, wall clock 00:13:29.832-00:13:29.837; duplicates elided]
00:13:29.837 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
length 1 00:13:29.837 [2024-02-14 20:14:07.009254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.009834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.010970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.011014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.011066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.011110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.837 [2024-02-14 20:14:07.011158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011813] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.011971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.012999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 
[2024-02-14 20:14:07.013196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.013961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.838 [2024-02-14 20:14:07.014980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015371] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.015913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 
[2024-02-14 20:14:07.016780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.016958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.017994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.018739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019333] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.839 [2024-02-14 20:14:07.019807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.019839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.019872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.019914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.019955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.019999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 
[2024-02-14 20:14:07.020407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.020989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.021960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.022953] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.023981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 
[2024-02-14 20:14:07.024138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.840 [2024-02-14 20:14:07.024412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.024945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.025987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841 [2024-02-14 20:14:07.026560] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:29.841
[message repeated continuously through 20:14:07.039659: ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1] 00:13:29.844
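Every entry in the collapsed flood above is the same per-command validation failing once per queued read: the target computes the transfer the command describes (NLB 1 * block size 512 = 512 bytes) and rejects it because the initiator-supplied SGL advertises only 1 byte. Below is a minimal Python model of that bounds check, for illustration only; the real check is C code in SPDK's ctrlr_bdev.c, and the function and parameter names here are hypothetical.

def read_len_fits_sgl(nlb: int, block_size: int, sgl_length: int) -> bool:
    """Model of the ctrlr_bdev.c:298 check: can the SGL hold the read data?"""
    # "NLB 1" in the log message is the decoded block count for the command.
    return nlb * block_size <= sgl_length

# The failing case from this log: 512 bytes of read data vs. a 1-byte SGL.
assert not read_len_fits_sgl(nlb=1, block_size=512, sgl_length=1)
# A correctly sized request passes and the read is submitted to the bdev.
assert read_len_fits_sgl(nlb=1, block_size=512, sgl_length=512)

When the check fails the command is completed with an error status instead of reaching the backing bdev, which is consistent with the host-side "Read completed with error (sct=0, sc=11)" suppression notices interleaved below.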
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.844
20:14:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.844
[notice repeated several more times through 00:13:30.116: Message suppressed 999 times: Read completed with error (sct=0, sc=11)]
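The rpc.py call traced above is the hotplug half of the stress test: while the host keeps issuing reads, ns_hotplug_stress.sh re-attaches the Delay0 bdev as a namespace of subsystem nqn.2016-06.io.spdk:cnode1. rpc.py is a thin JSON-RPC client; a rough stand-in over SPDK's conventional Unix socket could look like the sketch below. The /var/tmp/spdk.sock path and the {"namespace": {"bdev_name": ...}} parameter layout are assumptions here, not taken from this log; rpc.py remains the authoritative encoding.

import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    # SPDK apps accept JSON-RPC 2.0 requests on a Unix-domain socket;
    # /var/tmp/spdk.sock is the usual default path (assumed here).
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)  # stop once a full response parses
            except json.JSONDecodeError:
                continue  # response still arriving
    return json.loads(buf)

# Equivalent of the logged command: attach bdev Delay0 as a namespace of cnode1.
print(spdk_rpc("nvmf_subsystem_add_ns",
               {"nqn": "nqn.2016-06.io.spdk:cnode1",
                "namespace": {"bdev_name": "Delay0"}}))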
[2024-02-14 20:14:07.243952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.116
[message repeated continuously through 20:14:07.254993: ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1] 00:13:30.119
[2024-02-14 20:14:07.255039] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.255979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 
[2024-02-14 20:14:07.256447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.256958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.257958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.119 [2024-02-14 20:14:07.258658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.258693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.258736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.258785] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.258829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.258874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.258917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.258968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 
[2024-02-14 20:14:07.259910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.259980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.260999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.261971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262279] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.262973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.120 [2024-02-14 20:14:07.263001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 
[2024-02-14 20:14:07.263325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.263975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.264970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265835] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.265989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.121 [2024-02-14 20:14:07.266613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.266660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.266710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.266756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.266801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.266848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 
[2024-02-14 20:14:07.266894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.266939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.266988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:30.122 [2024-02-14 20:14:07.267558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.267982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 
20:14:07.268226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.268964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:30.122 [2024-02-14 20:14:07.269347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.269970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.270997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.271044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.271090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.122 [2024-02-14 20:14:07.271136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271688] ctrlr_bdev.c: 
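The flooding error comes from the length guard in SPDK's ctrlr_bdev.c (nvmf_bdev_ctrlr_read_cmd, as the log line itself names): the target rejects any read whose requested data length, NLB times the namespace block size, exceeds what the command's SGL can hold, and the host then sees each such read complete with an error status, which is the suppressed "Read completed with error (sct=0, sc=15)" message above. A minimal, self-contained C sketch of that guard follows; the helper name, parameters, and return convention here are illustrative, not SPDK's actual signature.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the check behind the repeated log line; names are
     * illustrative, not copied from lib/nvmf/ctrlr_bdev.c. */
    static int
    read_cmd_length_check(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
    {
        if (nlb * block_size > sgl_length) {
            /* The host asked for more data than its SGL can receive;
             * fail the command instead of overrunning the buffer. */
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
            return -1;
        }
        return 0; /* length fits: the read may be submitted to the bdev */
    }

    int
    main(void)
    {
        /* The values from this log: a 1-block (512-byte) read offered
         * only a 1-byte SGL, so the check fails on every attempt. */
        read_cmd_length_check(1, 512, 1);
        return 0;
    }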
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.271970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 20:14:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:13:30.123 [2024-02-14 20:14:07.272603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272698] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.272887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 20:14:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:30.123 [2024-02-14 20:14:07.273220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.273967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.123 [2024-02-14 20:14:07.274012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
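The repeated *ERROR* line is the SPDK NVMe-oF target's read-command length validation at ctrlr_bdev.c:298 rejecting reads whose requested transfer (NLB 1 block * 512-byte block size = 512 bytes) exceeds the 1-byte payload described by the command's SGL, while ns_hotplug_stress keeps resizing the NULL1 bdev through rpc.py (the size argument, 1014, tracks the test's null_size counter). The sketch below shows the shape of that check; the struct and function names are illustrative stand-ins, not SPDK's exact code.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct read_cmd {
	uint64_t slba; /* Starting LBA */
	uint16_t nlb;  /* Number of Logical Blocks; 0's-based per the NVMe spec */
};

/* Reject a read whose requested transfer exceeds the SGL payload buffer. */
static int
validate_read_length(const struct read_cmd *cmd, uint32_t block_size,
		     uint32_t sgl_length)
{
	uint64_t num_blocks = (uint64_t)cmd->nlb + 1; /* convert 0's-based field */

	if (num_blocks * block_size > sgl_length) {
		/* Same shape as the message flooding this log */
		fprintf(stderr,
			"*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			num_blocks, block_size, sgl_length);
		return -1; /* the target then completes the command with an error status */
	}
	return 0;
}

int
main(void)
{
	/* Mirrors the log: a 1-block (512-byte) read arriving with a 1-byte SGL. */
	struct read_cmd cmd = { .slba = 0, .nlb = 0 };

	return validate_read_length(&cmd, 512, 1) == 0 ? 0 : 1;
}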
00:13:30.123 [... the same *ERROR* line repeats, timestamps 20:14:07.274055 through 20:14:07.293542 ...]
00:13:30.128 [2024-02-14 20:14:07.293576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128
[2024-02-14 20:14:07.293627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.293675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.293724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.293768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.293811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.293862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.293903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.293951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.293995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.294796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.128 [2024-02-14 20:14:07.295756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.295801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.295838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.295877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.295916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.295957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296234] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.296982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 
[2024-02-14 20:14:07.297309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.297786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.298972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299503] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.129 [2024-02-14 20:14:07.299837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.299874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.299915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.299958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.299994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 
[2024-02-14 20:14:07.300897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.300993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.301979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.302992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303084] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.130 [2024-02-14 20:14:07.303937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.303980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 
[2024-02-14 20:14:07.304530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.304993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.305987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.306959] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.307997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 
[2024-02-14 20:14:07.308048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.308099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.308146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.308198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.308244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.131 [2024-02-14 20:14:07.308293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.308984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.309032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.309074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.309120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.309161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.309202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.132 [2024-02-14 20:14:07.309245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:13:30.132 [2024-02-14 20:14:07.309284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated at timestamps 20:14:07.309323 through 20:14:07.312932 ...]
00:13:30.132 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... same *ERROR* line repeated at timestamps 20:14:07.312982 through 20:14:07.332316 ...]
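The flood above is the target's request validation firing once per read in the batch: each command asks for NLB 1 block of 512 bytes while the attached SGL describes only 1 byte of buffer, so nvmf_bdev_ctrlr_read_cmd in ctrlr_bdev.c rejects the transfer, and the read completes with sct=0, sc=15 (0x0f, Data SGL Length Invalid in the NVMe spec, matching the suppressed message). The standalone C sketch below reproduces the shape of that length check and its two messages; it borrows the field names from the log output and is an illustration, not the SPDK source itself.

/* sgl_check.c - minimal sketch of the transfer-length check behind the
 * "Read NLB ... > SGL length ..." error above. Illustrative only; the
 * status values come from the NVMe spec, the names mirror the log. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC                0x0 /* sct=0 in the suppressed message */
#define NVME_SC_DATA_SGL_LENGTH_INVALID 0xf /* sc=15 in the suppressed message */

/* Returns 0 if the read fits in the SGL; otherwise logs the error and
 * fills in the status the controller would complete the command with. */
static int
read_cmd_check(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length,
               int *sct, int *sc)
{
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			num_blocks, block_size, sgl_length);
		*sct = NVME_SCT_GENERIC;
		*sc = NVME_SC_DATA_SGL_LENGTH_INVALID;
		return -1;
	}
	return 0;
}

int
main(void)
{
	int sct, sc;

	/* The failing case from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
	if (read_cmd_check(1, 512, 1, &sct, &sc) != 0) {
		printf("Read completed with error (sct=%d, sc=%d)\n", sct, sc);
	}
	return 0;
}

Built with cc sgl_check.c, the failing case prints both messages seen in the log; an initiator avoids the error by supplying an SGL of at least NLB * block_size bytes.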
[2024-02-14 20:14:07.332363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.332964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.333978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334755] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.334985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.335028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.335069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.335116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.335167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.137 [2024-02-14 20:14:07.335211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 
[2024-02-14 20:14:07.335919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.335994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.336997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.337973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338344] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.338971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 
[2024-02-14 20:14:07.339406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.138 [2024-02-14 20:14:07.339918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.339964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.340983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341923] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.341962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.342967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 
[2024-02-14 20:14:07.343281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.139 [2024-02-14 20:14:07.343703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.343751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.343799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.343844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.343888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.343929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.343984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.344965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345400] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.345979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 
[2024-02-14 20:14:07.346774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.346957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.347965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.348003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.348042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.348077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.140 [2024-02-14 20:14:07.348115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.348991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349223] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.349971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.350016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.350059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.350102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.350152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.350199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.350243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 [2024-02-14 20:14:07.350296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.141 
[2024-02-14 20:14:07.350341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same *ERROR* line repeats continuously from 20:14:07.350389 through 20:14:07.358332; duplicate messages elided ...]
00:13:30.143 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
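
[Annotation: the flood above is one validation failure repeated per read. Each command carries a 1-byte SGL while NLB 1 at a 512-byte block size needs 512 bytes of buffer, so nvmf_bdev_ctrlr_read_cmd rejects the read before it reaches the bdev layer and completes it with a generic status (sct=0) and Data SGL Length Invalid (sc=0x0f, i.e. 15), which is exactly the suppressed completion message. A minimal self-contained C sketch of that kind of check follows; it is illustrative only, not SPDK's actual code, and names such as read_cmd_length_ok and sgl_length are placeholders.]

    /* Illustrative sketch of the length check behind the repeated
     * "Read NLB ... > SGL length ..." error above. Not SPDK's exact code. */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* NVMe generic command status values; these match the (sct=0, sc=15)
     * seen in the suppressed completion message. */
    #define NVME_SCT_GENERIC                0x00
    #define NVME_SC_DATA_SGL_LENGTH_INVALID 0x0f  /* 15 decimal */

    /* A read is only submitted if the transport-provided data buffer (the
     * SGL) can hold NLB * block_size bytes; otherwise the command is
     * completed immediately with Data SGL Length Invalid. */
    static bool
    read_cmd_length_ok(uint64_t nlb, uint64_t block_size, uint32_t sgl_length,
                       uint8_t *sct, uint8_t *sc)
    {
        if (nlb * block_size > sgl_length) {
            fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu64
                    " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
            *sct = NVME_SCT_GENERIC;                /* sct=0 */
            *sc  = NVME_SC_DATA_SGL_LENGTH_INVALID; /* sc=15 */
            return false;
        }
        return true;
    }

    int
    main(void)
    {
        uint8_t sct = 0, sc = 0;
        /* The failing case from this log: NLB 1, 512-byte blocks, 1-byte SGL. */
        if (!read_cmd_length_ok(1, 512, 1, &sct, &sc)) {
            printf("Read completed with error (sct=%u, sc=%u)\n",
                   (unsigned)sct, (unsigned)sc);
        }
        return 0;
    }
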
[... the same *ERROR* line resumes and repeats from 20:14:07.358376 through 20:14:07.376108; duplicate messages elided ...]
00:13:30.145 [2024-02-14 20:14:07.376156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.376978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377257] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.377992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 
[2024-02-14 20:14:07.378435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.378726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.379966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.380002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.147 [2024-02-14 20:14:07.380034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.380967] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.381775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 
[2024-02-14 20:14:07.382407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.382986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.383987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.148 [2024-02-14 20:14:07.384033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384617] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.384834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.385934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 
[2024-02-14 20:14:07.385973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.386979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.387843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388527] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.149 [2024-02-14 20:14:07.388826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.388855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.388889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.388942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.388977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 
[2024-02-14 20:14:07.389617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.389955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.390903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.391981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.392012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.150 [2024-02-14 20:14:07.392042] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:30.150 [2024-02-14 20:14:07.392084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:30.153 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:13:30.156 [2024-02-14 20:14:07.417668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 
[2024-02-14 20:14:07.417711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.417743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.417773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.417813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.417850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.417883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.417926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.417972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.418979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.419981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420246] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.420983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 
[2024-02-14 20:14:07.421608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.156 [2024-02-14 20:14:07.421997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.422981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423811] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.423969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.424980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 
[2024-02-14 20:14:07.425201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.425997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.426045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.426092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.426147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.426198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.426241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.426281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.426322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.157 [2024-02-14 20:14:07.426364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.426995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427776] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.427998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 
[2024-02-14 20:14:07.428881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.428973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.429976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.430987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.431029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.431068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.431105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.431134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.431165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.431209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.158 [2024-02-14 20:14:07.431254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431297] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.431976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 
[2024-02-14 20:14:07.432477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.432995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.159 [2024-02-14 20:14:07.433818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:13:30.159 [2024-02-14 20:14:07.433856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:30.159 [the same *ERROR* line repeats back-to-back, timestamps 20:14:07.433896 through 20:14:07.437561; duplicates collapsed]
00:13:30.160 true
00:13:30.160 [the same *ERROR* line continues repeating, timestamps 20:14:07.437588 through 20:14:07.454258; duplicates collapsed]
00:13:30.162 Message suppressed 999 times: [2024-02-14 20:14:07.454307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:30.162 Read completed with error (sct=0, sc=15)
00:13:30.162 [the same *ERROR* line resumes, timestamps 20:14:07.454355 through 20:14:07.457486, and continues below]
Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.457964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458638] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.458975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 
[2024-02-14 20:14:07.459762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.459847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.460961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.461000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.461041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.461079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.163 [2024-02-14 20:14:07.461109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
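The flood above is a single condition reported once per rejected I/O: the READ specifies NLB 1 at a 512-byte block size, i.e. a 512-byte transfer, while the SGL carried by the command describes only a 1-byte buffer, so nvmf_bdev_ctrlr_read_cmd fails the request before it reaches the backing bdev. A minimal shell sketch of that arithmetic, with illustrative variable names that are not SPDK's:

  nlb=1; block_size=512; sgl_len=1
  xfer=$(( nlb * block_size ))       # bytes the READ would transfer: 512
  if (( xfer > sgl_len )); then      # 512 > 1, so the command is rejected
      echo "Read NLB $nlb * block size $block_size > SGL length $sgl_len"
  fi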
00:13:30.163 20:14:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311
00:13:30.163 [read error repeated 7 times, 2024-02-14 20:14:07.461151 through 20:14:07.461430]
00:13:30.163 20:14:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
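Between the error bursts, the driver's trace shows the stress pattern itself: ns_hotplug_stress.sh first verifies the target process (PID 1709311) is still alive with kill -0, then hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 via the RPC script while I/O continues. A sketch of that liveness-check-then-remove/re-add loop, assuming a structure the trace only hints at; the backing bdev name Malloc0, the sleeps, and the re-add flags are assumptions, not the actual script:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  tgt_pid=1709311                                  # target PID from this log

  while kill -0 "$tgt_pid" 2>/dev/null; do         # stop if the target dies
      "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1  # hot-remove nsid 1
      sleep 1
      # Re-attach the namespace; Malloc0 and the -n nsid flag are assumed
      # here and may differ by SPDK version and test configuration.
      "$rpc_py" nvmf_subsystem_add_ns "$nqn" Malloc0 -n 1
      sleep 1
  done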
00:13:30.164 [read error repeated continuously, 2024-02-14 20:14:07.461482 through 20:14:07.479188]
00:13:30.165
[2024-02-14 20:14:07.479239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.479996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.480883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481651] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.481987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.482034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.482082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.482122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.482168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.482214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.482256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.482304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.482349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.165 [2024-02-14 20:14:07.482397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 
[2024-02-14 20:14:07.482853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.482963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.483855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.484982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485312] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.485965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 
[2024-02-14 20:14:07.486377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.486926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.487977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488918] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.488993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.489883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.490229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.490264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 
[2024-02-14 20:14:07.490292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.490328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.490368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.490403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.490443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.490481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.166 [2024-02-14 20:14:07.490512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.490962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.491983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492421] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.492894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 
[2024-02-14 20:14:07.493796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.493991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:30.167 [2024-02-14 20:14:07.494931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:13:30.167 [2024-02-14 20:14:07.494972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:30.168 Message suppressed 999 times: [2024-02-14 20:14:07.500252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:30.168 Read completed with error (sct=0, sc=15)
00:13:30.428 [2024-02-14 20:14:07.520193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:31.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:31.363 20:14:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:31.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:31.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:31.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:31.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:31.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:31.363 20:14:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015
00:13:31.363 20:14:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:13:31.622 true
00:13:31.622 20:14:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311
00:13:31.622 20:14:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[log elided: the nvmf_subsystem_add_ns Delay0 / null_size=N / bdev_null_resize NULL1 N / kill -0 1709311 / nvmf_subsystem_remove_ns cycle above repeats with N stepping from 1016 to 1039 (20:14:09 through 20:14:25, Jenkins time 00:13:32.593 to 00:13:48.386); interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" notices are omitted]
00:13:49.763 Initializing NVMe Controllers
00:13:49.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:49.763 Controller IO queue size 128, less than required.
00:13:49.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:49.763 Controller IO queue size 128, less than required.
00:13:49.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:49.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:49.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:49.763 Initialization complete. Launching workers.
00:13:49.763 ========================================================
00:13:49.763 Latency(us)
00:13:49.763 Device Information : IOPS MiB/s Average min max
00:13:49.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2003.77 0.98 28846.97 1898.10 1083880.11
00:13:49.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12278.73 6.00 10424.55 2334.96 295574.71
00:13:49.763 ========================================================
00:13:49.763 Total : 14282.50 6.97 13009.12 1898.10 1083880.11
00:13:49.763
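[editor's note] The Total row above is the IOPS-weighted mean of the two per-namespace averages, not a simple arithmetic mean. A quick sanity check of the table's own numbers:

    awk 'BEGIN { printf "%.2f\n", (2003.77*28846.97 + 12278.73*10424.55) / 14282.50 }'

prints 13009.13, agreeing with the reported 13009.12 up to rounding; likewise 0.98 MiB/s over 2003.77 IOPS works out to roughly 512 bytes per I/O, consistent with the 512-byte block size. NSID 1, the namespace the loop above keeps hot-removing and re-adding, shows almost 3x the average latency of NSID 2 at about a sixth of its throughput, consistent with I/O stalling during the hotplug windows.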
00:13:49.763 20:14:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:50.022 20:14:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040
00:13:50.022 20:14:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:13:50.022 true
00:13:50.022 20:14:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1709311
00:13:50.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1709311) - No such process
00:13:50.022 20:14:27 -- target/ns_hotplug_stress.sh@44 -- # wait 1709311
00:13:50.022 20:14:27 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:13:50.022 20:14:27 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:13:50.022 20:14:27 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:50.022 20:14:27 -- nvmf/common.sh@116 -- # sync
00:13:50.022 20:14:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:50.022 20:14:27 -- nvmf/common.sh@119 -- # set +e
00:13:50.022 20:14:27 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:50.022 20:14:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:50.022 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:50.022 20:14:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:50.022 20:14:27 -- nvmf/common.sh@123 -- # set -e
00:13:50.022 20:14:27 -- nvmf/common.sh@124 -- # return 0
00:13:50.022 20:14:27 -- nvmf/common.sh@477 -- # '[' -n 1708821 ']'
00:13:50.022 20:14:27 -- nvmf/common.sh@478 -- # killprocess 1708821
00:13:50.022 20:14:27 -- common/autotest_common.sh@924 -- # '[' -z 1708821 ']'
00:13:50.022 20:14:27 -- common/autotest_common.sh@928 -- # kill -0 1708821
00:13:50.022 20:14:27 -- common/autotest_common.sh@929 -- # uname
00:13:50.022 20:14:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:13:50.022 20:14:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1708821
00:13:50.022 20:14:27 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:13:50.022 20:14:27 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:13:50.022 20:14:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1708821'
killing process with pid 1708821
00:13:50.022 20:14:27 -- common/autotest_common.sh@943 -- # kill 1708821
00:13:50.022 20:14:27 -- common/autotest_common.sh@948 -- # wait 1708821
00:13:50.279 20:14:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:13:50.279 20:14:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:13:50.279 20:14:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:13:50.279 20:14:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:50.279 20:14:27 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:13:50.279 20:14:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:50.279 20:14:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:50.279 20:14:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:52.231 20:14:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:13:52.231
00:13:52.231 real 0m41.458s
00:13:52.231 user 2m29.345s
00:13:52.231 sys 0m10.879s
00:13:52.231 20:14:29 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:52.231 20:14:29 -- common/autotest_common.sh@10 -- # set +x
00:13:52.231 ************************************
00:13:52.231 END TEST nvmf_ns_hotplug_stress
00:13:52.231 ************************************
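[editor's note] Reconstructed from the script line numbers logged above (@35 kill, @36 remove, @37 add, @40/@41 resize), the ns_hotplug_stress body reduces to a small RPC loop. This is a minimal sketch under stated assumptions: $PERF_PID stands in for whatever variable the script uses to track the I/O generator (PID 1709311 in this run), and error handling is omitted:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1014
    while kill -0 "$PERF_PID" 2>/dev/null; do      # @35: run until the I/O generator exits
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1   # @36: hot-remove namespace 1 under load
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0 # @37: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))               # @40: reaches 1015..1040 in this run
        "$rpc" bdev_null_resize NULL1 "$null_size" # @41: resize the null bdev as extra churn
    done

On this reading, the "Read NLB 1 * block size 512 > SGL length 1" floods and the suppressed "Read completed with error (sct=0, sc=11)" messages are reads caught mid-hotplug failing back to the initiator, which is the condition this stress test exists to provoke.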
00:13:52.491 20:14:29 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:52.491 20:14:29 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']'
00:13:52.491 20:14:29 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:13:52.491 20:14:29 -- common/autotest_common.sh@10 -- # set +x
00:13:52.491 ************************************
00:13:52.491 START TEST nvmf_connect_stress
00:13:52.491 ************************************
00:13:52.491 20:14:29 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:52.491 * Looking for test storage...
00:13:52.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:52.491 20:14:29 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:52.491 20:14:29 -- nvmf/common.sh@7 -- # uname -s
00:13:52.491 20:14:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:52.491 20:14:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:52.491 20:14:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:52.491 20:14:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:52.491 20:14:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:52.491 20:14:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:52.491 20:14:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:52.491 20:14:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:52.491 20:14:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:52.491 20:14:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:52.491 20:14:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:13:52.491 20:14:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:13:52.491 20:14:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:52.491 20:14:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:52.491 20:14:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:52.491 20:14:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:52.491 20:14:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:52.491 20:14:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:52.491 20:14:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:52.491 20:14:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[log elided: the remainder of this multi-hundred-character PATH value, in which the same /opt trio already repeats several times over, followed by the stock /usr/local/bin ... /var/lib/snapd/snap/bin tail]
[log elided: paths/export.sh@3 and @4 prepend /opt/go/1.21.1/bin and /opt/protoc/21.7/bin to the same value, @5 exports PATH, and @6 echoes the final value; these further near-identical PATH dumps are omitted]
00:13:52.491 20:14:29 -- nvmf/common.sh@46 -- # : 0
00:13:52.491 20:14:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:13:52.491 20:14:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:13:52.491 20:14:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:13:52.491 20:14:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:52.491 20:14:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:52.491 20:14:29 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:13:52.491 20:14:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:13:52.491 20:14:29 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:13:52.491 20:14:29 -- target/connect_stress.sh@12 -- # nvmftestinit
00:13:52.491 20:14:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:13:52.491 20:14:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:52.491 20:14:29 -- nvmf/common.sh@436 -- # prepare_net_devs
00:13:52.491 20:14:29 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:13:52.491 20:14:29 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:13:52.491 20:14:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:52.492 20:14:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:52.492 20:14:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:52.492 20:14:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:13:52.492 20:14:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:13:52.492 20:14:29 -- nvmf/common.sh@284 -- # xtrace_disable
00:13:52.492 20:14:29 -- common/autotest_common.sh@10 -- # set +x
00:13:57.764 20:14:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:13:57.764 20:14:34 -- nvmf/common.sh@290 -- # pci_devs=()
00:13:57.764 20:14:34 -- nvmf/common.sh@290 -- # local -a pci_devs
00:13:57.764 20:14:34 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:13:57.764 20:14:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:13:57.764 20:14:34 -- nvmf/common.sh@292 -- # pci_drivers=()
00:13:57.764 20:14:34 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:13:57.764 20:14:34 -- nvmf/common.sh@294 -- # net_devs=()
00:13:57.764 20:14:34 -- nvmf/common.sh@294 -- # local -ga net_devs
00:13:57.764 20:14:34 -- nvmf/common.sh@295 -- # e810=()
00:13:57.764 20:14:34 -- nvmf/common.sh@295 -- # local -ga e810
00:13:57.764 20:14:34 -- nvmf/common.sh@296 -- # x722=()
00:13:57.764 20:14:34 -- nvmf/common.sh@296 -- # local -ga x722
00:13:57.764 20:14:34 -- nvmf/common.sh@297 -- # mlx=()
00:13:57.764 20:14:34 -- nvmf/common.sh@297 -- # local -ga mlx
00:13:57.764 20:14:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:57.764 20:14:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:13:57.764 20:14:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:13:57.764 20:14:34 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:13:57.764 20:14:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:13:57.764 20:14:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:13:57.764 Found 0000:af:00.0 (0x8086 - 0x159b)
00:13:57.764 20:14:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:13:57.764 20:14:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:13:57.764 Found 0000:af:00.1 (0x8086 - 0x159b)
00:13:57.764 20:14:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:13:57.764 20:14:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:13:57.764 20:14:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:13:57.764 20:14:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:57.765 20:14:34 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:13:57.765 20:14:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:57.765 20:14:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:13:57.765 Found net devices under 0000:af:00.0: cvl_0_0
00:13:57.765 20:14:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:13:57.765 20:14:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:13:57.765 20:14:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:57.765 20:14:34 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:13:57.765 20:14:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:57.765 20:14:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:13:57.765 Found net devices under 0000:af:00.1: cvl_0_1
00:13:57.765 20:14:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:13:57.765 20:14:34 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:13:57.765 20:14:34 -- nvmf/common.sh@402 -- # is_hw=yes
00:13:57.765 20:14:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:13:57.765 20:14:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:13:57.765 20:14:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:13:57.765 20:14:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:57.765 20:14:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:57.765 20:14:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:57.765 20:14:34 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:13:57.765 20:14:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:57.765 20:14:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:57.765 20:14:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:13:57.765 20:14:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:57.765 20:14:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:57.765 20:14:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:13:57.765 20:14:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:13:57.765 20:14:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:13:57.765 20:14:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:57.765 20:14:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:57.765 20:14:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:57.765 20:14:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:13:57.765 20:14:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:57.765 20:14:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:57.765 20:14:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:57.765 20:14:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:13:57.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:57.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms
00:13:57.765
00:13:57.765 --- 10.0.0.2 ping statistics ---
00:13:57.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:57.765 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms
00:13:57.765 20:14:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:57.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:57.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms
00:13:57.765
00:13:57.765 --- 10.0.0.1 ping statistics ---
00:13:57.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:57.765 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms
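[editor's note] The nvmf_tcp_init sequence above builds the standard two-port rig: the target's NIC is moved into its own network namespace so the initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) talk over the physical E810 link, presumably cabled back-to-back, even though both ends live on one host. Condensed straight from the commands in the log (interface and namespace names as logged; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # reachability check, as above

The sub-millisecond round trips in the ping output (0.256 ms out, 0.365 ms back from inside the namespace) confirm the link is up before the target process is started.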
00:13:57.765 20:14:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:57.765 20:14:35 -- nvmf/common.sh@410 -- # return 0
00:13:57.765 20:14:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:13:57.765 20:14:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:57.765 20:14:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:13:57.765 20:14:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:13:57.765 20:14:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:57.765 20:14:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:13:57.765 20:14:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:13:57.765 20:14:35 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:57.765 20:14:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:13:57.765 20:14:35 -- common/autotest_common.sh@710 -- # xtrace_disable
00:13:57.765 20:14:35 -- common/autotest_common.sh@10 -- # set +x
00:13:57.765 20:14:35 -- nvmf/common.sh@469 -- # nvmfpid=1718279
00:13:57.765 20:14:35 -- nvmf/common.sh@470 -- # waitforlisten 1718279
00:13:57.765 20:14:35 -- common/autotest_common.sh@817 -- # '[' -z 1718279 ']'
00:13:57.765 20:14:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:57.765 20:14:35 -- common/autotest_common.sh@822 -- # local max_retries=100
00:13:57.765 20:14:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:57.765 20:14:35 -- common/autotest_common.sh@826 -- # xtrace_disable
00:13:57.765 20:14:35 -- common/autotest_common.sh@10 -- # set +x
00:13:57.765 20:14:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:57.765 [2024-02-14 20:14:35.155989] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:13:57.765 [2024-02-14 20:14:35.156032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:58.024 EAL: No free 2048 kB hugepages reported on node 1
00:13:58.024 [2024-02-14 20:14:35.218696] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:58.024 [2024-02-14 20:14:35.293666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:58.024 [2024-02-14 20:14:35.293785] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:58.024 [2024-02-14 20:14:35.293792] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:58.024 [2024-02-14 20:14:35.293798] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:58.024 [2024-02-14 20:14:35.293831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:58.024 [2024-02-14 20:14:35.293855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:58.024 [2024-02-14 20:14:35.293856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:58.592 20:14:35 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:13:58.592 20:14:35 -- common/autotest_common.sh@850 -- # return 0
00:13:58.592 20:14:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:13:58.592 20:14:35 -- common/autotest_common.sh@716 -- # xtrace_disable
00:13:58.592 20:14:35 -- common/autotest_common.sh@10 -- # set +x
00:13:58.592 20:14:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:58.592 20:14:36 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:58.592 20:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:58.592 20:14:36 -- common/autotest_common.sh@10 -- # set +x
00:13:58.851 [2024-02-14 20:14:36.012539] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:58.851 20:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:58.851 20:14:36 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:58.851 20:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:58.851 20:14:36 -- common/autotest_common.sh@10 -- # set +x
00:13:58.851 20:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:58.851 20:14:36 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:58.851 20:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:58.851 20:14:36 -- common/autotest_common.sh@10 -- # set +x
00:13:58.851 [2024-02-14 20:14:36.040752] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:58.851 20:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:58.851 20:14:36 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:13:58.851 20:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:58.851 20:14:36 -- common/autotest_common.sh@10 -- # set +x
00:13:58.851 NULL1
00:13:58.851 20:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:58.851 20:14:36 -- target/connect_stress.sh@21 -- # PERF_PID=1718523
00:13:58.851 20:14:36 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:58.851 20:14:36 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:13:58.851 20:14:36 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:58.851 20:14:36 -- target/connect_stress.sh@27 -- # seq 1 20
00:13:58.851 20:14:36 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:58.851 20:14:36 -- target/connect_stress.sh@28 -- # cat
[log elided: the for/cat pair above repeats for all 20 loop iterations, with one interleaved "EAL: No free 2048 kB hugepages reported on node 1"; the repetitions are omitted]
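[editor's note] Stripped of the rpc_cmd/xtrace scaffolding, the bring-up connect_stress.sh just performed is four RPCs followed by launching the stressor. A sketch using the same arguments as the log; note it assumes rpc.py's default socket, whereas the test's rpc_cmd wrapper actually reaches the target running inside the cvl_0_0_ns_spdk namespace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks

    # Launch the connection stressor against that listener; the $! capture is a
    # stand-in for however the script records PERF_PID (1718523 in this run):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!

The kill -0 $PERF_PID polling that follows merely waits out the stressor's ten-second run (-t 10; the poll spans 20:14:36 to 20:14:46), so the eventual "No such process" below is the loop's normal exit condition, not a failure.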
target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:58.851 20:14:36 -- target/connect_stress.sh@28 -- # cat
00:13:58.851 [... the @27 for / @28 cat pair repeats for each of the 20 iterations ...]
00:13:58.851 EAL: No free 2048 kB hugepages reported on node 1
00:13:58.852 20:14:36 -- target/connect_stress.sh@34 -- # kill -0 1718523
00:13:58.852 20:14:36 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:58.852 20:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:58.852 20:14:36 -- common/autotest_common.sh@10 -- # set +x
00:13:59.111 20:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:59.111 [... the @34 kill -0 1718523 / @35 rpc_cmd poll cycle repeats a few times per second, 00:13:59 through 00:14:06, while the stress process runs ...]
00:14:06.353 20:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
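The repeating @34/@35 entries on both sides of this point are connect_stress.sh waiting for the stress process (pid 1718523) to exit while keeping the target's RPC server busy. A minimal sketch of that wait loop, reconstructed from the script/line markers in the trace (the variable and file names are assumptions; only the kill -0 / rpc_cmd / wait / rm sequence is confirmed by this log):

    # Poll the stress process; exercise the RPC server while it runs.
    while kill -0 "$stress_pid"; do   # line 34: is the pid still alive?
        rpc_cmd                       # line 35: issue an RPC against the target
    done
    wait "$stress_pid"                # line 38: reap the child
    rm -f "$rpc_file"                 # line 39: drop the RPC scratch file

kill -0 delivers no signal; it only tests whether the pid exists, which is why the loop ends with the "kill: (1718523) - No such process" message just below once the stress run completes.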
00:14:06.353 20:14:43 -- target/connect_stress.sh@34 -- # kill -0 1718523
00:14:06.353 20:14:43 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:06.353 20:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:06.353 20:14:43 -- common/autotest_common.sh@10 -- # set +x
[... the poll cycle continues, 00:14:06.611 through 00:14:08.782 ...]
00:14:09.041 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:09.041 20:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:09.041 20:14:46 -- target/connect_stress.sh@34 -- # kill -0 1718523
00:14:09.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1718523) - No such process
00:14:09.041 20:14:46 -- target/connect_stress.sh@38 -- # wait 1718523
00:14:09.041 20:14:46 -- target/connect_stress.sh@39 -- # rm
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:09.041 20:14:46 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:09.041 20:14:46 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:09.041 20:14:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:09.041 20:14:46 -- nvmf/common.sh@116 -- # sync 00:14:09.041 20:14:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:09.041 20:14:46 -- nvmf/common.sh@119 -- # set +e 00:14:09.041 20:14:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:09.041 20:14:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:09.301 rmmod nvme_tcp 00:14:09.301 rmmod nvme_fabrics 00:14:09.301 rmmod nvme_keyring 00:14:09.301 20:14:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:09.301 20:14:46 -- nvmf/common.sh@123 -- # set -e 00:14:09.301 20:14:46 -- nvmf/common.sh@124 -- # return 0 00:14:09.301 20:14:46 -- nvmf/common.sh@477 -- # '[' -n 1718279 ']' 00:14:09.301 20:14:46 -- nvmf/common.sh@478 -- # killprocess 1718279 00:14:09.301 20:14:46 -- common/autotest_common.sh@924 -- # '[' -z 1718279 ']' 00:14:09.301 20:14:46 -- common/autotest_common.sh@928 -- # kill -0 1718279 00:14:09.301 20:14:46 -- common/autotest_common.sh@929 -- # uname 00:14:09.301 20:14:46 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:09.301 20:14:46 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1718279 00:14:09.301 20:14:46 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:14:09.301 20:14:46 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:14:09.301 20:14:46 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1718279' 00:14:09.301 killing process with pid 1718279 00:14:09.301 20:14:46 -- common/autotest_common.sh@943 -- # kill 1718279 00:14:09.301 20:14:46 -- common/autotest_common.sh@948 -- # wait 1718279 00:14:09.560 20:14:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:09.560 20:14:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:09.560 20:14:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:09.560 20:14:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.560 20:14:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:09.560 20:14:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.560 20:14:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.560 20:14:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.466 20:14:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:11.466 00:14:11.466 real 0m19.163s 00:14:11.466 user 0m41.435s 00:14:11.466 sys 0m8.079s 00:14:11.466 20:14:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:11.466 20:14:48 -- common/autotest_common.sh@10 -- # set +x 00:14:11.466 ************************************ 00:14:11.466 END TEST nvmf_connect_stress 00:14:11.466 ************************************ 00:14:11.466 20:14:48 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:11.466 20:14:48 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:11.466 20:14:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:11.466 20:14:48 -- common/autotest_common.sh@10 -- # set +x 00:14:11.466 ************************************ 00:14:11.466 START TEST nvmf_fused_ordering 00:14:11.466 ************************************ 00:14:11.466 20:14:48 -- common/autotest_common.sh@1102 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:11.726 * Looking for test storage...
00:14:11.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:11.726 20:14:48 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:11.726 20:14:48 -- nvmf/common.sh@7 -- # uname -s
00:14:11.726 20:14:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:11.726 20:14:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:11.726 20:14:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:11.726 20:14:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:11.726 20:14:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:11.726 20:14:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:11.726 20:14:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:11.726 20:14:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:11.726 20:14:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:11.726 20:14:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:11.726 20:14:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:14:11.726 20:14:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:14:11.726 20:14:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:11.726 20:14:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:11.726 20:14:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:11.726 20:14:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:11.726 20:14:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:11.726 20:14:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:11.726 20:14:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:11.726 20:14:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same toolchain dirs repeated by earlier sourcing ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:11.726 20:14:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... duplicate toolchain dirs elided ...]:/var/lib/snapd/snap/bin
00:14:11.726 20:14:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicate toolchain dirs elided ...]:/var/lib/snapd/snap/bin
00:14:11.726 20:14:48 -- paths/export.sh@5 -- # export PATH
00:14:11.726 20:14:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicate toolchain dirs elided ...]:/var/lib/snapd/snap/bin
00:14:11.726 20:14:48 -- nvmf/common.sh@46 -- # : 0
00:14:11.726 20:14:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:14:11.726 20:14:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:14:11.726 20:14:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:14:11.726 20:14:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:11.726 20:14:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:11.726 20:14:48 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:14:11.726 20:14:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:14:11.726 20:14:48 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:14:11.726 20:14:48 -- target/fused_ordering.sh@12 -- # nvmftestinit
00:14:11.726 20:14:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:14:11.726 20:14:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:11.726 20:14:48 -- nvmf/common.sh@436 -- # prepare_net_devs
00:14:11.726 20:14:48 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:14:11.726 20:14:48 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:14:11.726 20:14:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:11.726 20:14:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:11.726 20:14:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:11.726 20:14:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:14:11.726 20:14:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:14:11.726 20:14:48 -- nvmf/common.sh@284 -- # xtrace_disable
00:14:11.726 20:14:48 -- common/autotest_common.sh@10 -- # set +x
00:14:18.297 20:14:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:14:18.297 20:14:54 -- nvmf/common.sh@290 -- # pci_devs=()
00:14:18.297 20:14:54 -- nvmf/common.sh@290 -- # local -a pci_devs
00:14:18.297 20:14:54 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:14:18.297 20:14:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:14:18.297 20:14:54 -- nvmf/common.sh@292 -- # pci_drivers=()
00:14:18.297 20:14:54 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:14:18.297 20:14:54 -- nvmf/common.sh@294 -- # net_devs=()
00:14:18.297 20:14:54 -- nvmf/common.sh@294 -- # local -ga net_devs
00:14:18.297 20:14:54 -- nvmf/common.sh@295 -- # e810=()
00:14:18.297 20:14:54 -- nvmf/common.sh@295 -- # local -ga e810
00:14:18.297 20:14:54 -- nvmf/common.sh@296 -- # x722=()
00:14:18.297 20:14:54 -- nvmf/common.sh@296 -- # local -ga x722 00:14:18.297 20:14:54 -- nvmf/common.sh@297 -- # mlx=() 00:14:18.297 20:14:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:18.297 20:14:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.297 20:14:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:18.297 20:14:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:18.297 20:14:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:18.297 20:14:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:18.297 20:14:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:18.297 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:18.297 20:14:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:18.297 20:14:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:18.297 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:18.297 20:14:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:18.297 20:14:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:18.297 20:14:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.297 20:14:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:18.297 20:14:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.297 20:14:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:18.297 Found net devices under 0000:af:00.0: cvl_0_0 00:14:18.297 20:14:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
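The loop traced here (its sysfs step picks up again below for the second port, 0000:af:00.1) is nvmf/common.sh matching supported NIC PCI IDs against the bus and resolving each matching function to its kernel net device. A standalone sketch of the same ID-to-netdev mapping, assuming lspci is available and using the E810 device ID 8086:159b reported for this host; this helper is illustrative, not the script's own code:

    #!/usr/bin/env bash
    # List Intel E810 functions (vendor 8086, device 159b) and print the
    # net device each one exposes under /sys/bus/pci/devices/<addr>/net/.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdev ]] && echo "Found net device under $pci: ${netdev##*/}"
        done
    done

Both ports here resolve to the renamed interfaces cvl_0_0 and cvl_0_1; the TCP init code below keeps cvl_0_1 as the initiator interface and moves cvl_0_0 into the cvl_0_0_ns_spdk network namespace to act as the target side.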
00:14:18.297 20:14:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:18.297 20:14:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.297 20:14:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:18.297 20:14:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.297 20:14:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:18.297 Found net devices under 0000:af:00.1: cvl_0_1 00:14:18.297 20:14:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.297 20:14:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:18.297 20:14:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:18.297 20:14:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:18.297 20:14:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:18.297 20:14:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.297 20:14:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.297 20:14:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.297 20:14:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:18.297 20:14:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.297 20:14:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.297 20:14:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:18.297 20:14:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.297 20:14:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.297 20:14:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:18.297 20:14:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:18.298 20:14:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.298 20:14:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.298 20:14:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.298 20:14:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.298 20:14:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:18.298 20:14:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.298 20:14:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.298 20:14:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.298 20:14:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:18.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:14:18.298 00:14:18.298 --- 10.0.0.2 ping statistics --- 00:14:18.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.298 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:14:18.298 20:14:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:14:18.298 00:14:18.298 --- 10.0.0.1 ping statistics --- 00:14:18.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.298 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:14:18.298 20:14:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.298 20:14:54 -- nvmf/common.sh@410 -- # return 0 00:14:18.298 20:14:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:18.298 20:14:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.298 20:14:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:18.298 20:14:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:18.298 20:14:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.298 20:14:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:18.298 20:14:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:18.298 20:14:54 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:18.298 20:14:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:18.298 20:14:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:18.298 20:14:54 -- common/autotest_common.sh@10 -- # set +x 00:14:18.298 20:14:54 -- nvmf/common.sh@469 -- # nvmfpid=1723960 00:14:18.298 20:14:54 -- nvmf/common.sh@470 -- # waitforlisten 1723960 00:14:18.298 20:14:54 -- common/autotest_common.sh@817 -- # '[' -z 1723960 ']' 00:14:18.298 20:14:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.298 20:14:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:18.298 20:14:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.298 20:14:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:18.298 20:14:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:18.298 20:14:54 -- common/autotest_common.sh@10 -- # set +x 00:14:18.298 [2024-02-14 20:14:54.924709] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:18.298 [2024-02-14 20:14:54.924753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.298 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.298 [2024-02-14 20:14:54.986864] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.298 [2024-02-14 20:14:55.061335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:18.298 [2024-02-14 20:14:55.061436] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.298 [2024-02-14 20:14:55.061443] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.298 [2024-02-14 20:14:55.061449] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:18.298 [2024-02-14 20:14:55.061468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.298 20:14:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:18.298 20:14:55 -- common/autotest_common.sh@850 -- # return 0 00:14:18.298 20:14:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:18.298 20:14:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:18.298 20:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:18.558 20:14:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.558 20:14:55 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:18.558 20:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.558 20:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:18.558 [2024-02-14 20:14:55.731747] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.558 20:14:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.558 20:14:55 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:18.558 20:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.558 20:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:18.558 20:14:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.558 20:14:55 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.558 20:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.558 20:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:18.558 [2024-02-14 20:14:55.747892] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.558 20:14:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.558 20:14:55 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:18.558 20:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.558 20:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:18.558 NULL1 00:14:18.558 20:14:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.558 20:14:55 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:18.558 20:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.558 20:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:18.558 20:14:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.558 20:14:55 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:18.558 20:14:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:18.558 20:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:18.558 20:14:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:18.558 20:14:55 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:18.558 [2024-02-14 20:14:55.798265] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:14:18.558 [2024-02-14 20:14:55.798293] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724204 ]
00:14:18.558 EAL: No free 2048 kB hugepages reported on node 1
00:14:19.498 Attached to nqn.2016-06.io.spdk:cnode1
00:14:19.498 Namespace ID: 1 size: 1GB
00:14:19.498 fused_ordering(0)
00:14:19.498 fused_ordering(1)
00:14:19.498 fused_ordering(2)
00:14:19.498 fused_ordering(3)
[... fused_ordering(4) through fused_ordering(1022) reported in order, timestamps advancing from 00:14:19.498 to 00:14:22.519 ...]
00:14:22.519 fused_ordering(1023)
00:14:22.519 20:14:59 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:22.519 20:14:59 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:22.519 20:14:59 -- nvmf/common.sh@476 -- # nvmfcleanup
00:14:22.519 20:14:59 -- nvmf/common.sh@116 -- # sync
00:14:22.519 20:14:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:22.519 20:14:59 -- nvmf/common.sh@119 -- # set +e
00:14:22.519 20:14:59 -- nvmf/common.sh@120 -- # for i in {1..20}
00:14:22.519 20:14:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:22.519 rmmod nvme_tcp
00:14:22.519 rmmod nvme_fabrics
00:14:22.519 rmmod nvme_keyring
00:14:22.519 20:14:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:22.519 20:14:59 -- nvmf/common.sh@123 -- # set -e
00:14:22.519 20:14:59 -- nvmf/common.sh@124 -- # return 0
00:14:22.519 20:14:59 -- nvmf/common.sh@477 -- # '[' -n 1723960 ']'
00:14:22.519 20:14:59 -- nvmf/common.sh@478 -- # killprocess 1723960
00:14:22.519 20:14:59 -- common/autotest_common.sh@924 -- # '[' -z 1723960 ']'
00:14:22.519 20:14:59 -- common/autotest_common.sh@928 -- # kill -0 1723960
00:14:22.519 20:14:59 -- common/autotest_common.sh@929 -- # uname
00:14:22.519 20:14:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:14:22.519 20:14:59 -- common/autotest_common.sh@930 -- # ps --no-headers
-o comm= 1723960 00:14:22.519 20:14:59 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:14:22.519 20:14:59 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:14:22.519 20:14:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1723960' 00:14:22.519 killing process with pid 1723960 00:14:22.519 20:14:59 -- common/autotest_common.sh@943 -- # kill 1723960 00:14:22.519 20:14:59 -- common/autotest_common.sh@948 -- # wait 1723960 00:14:22.779 20:15:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:22.779 20:15:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:22.779 20:15:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:22.779 20:15:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.779 20:15:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:22.779 20:15:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.779 20:15:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.779 20:15:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.695 20:15:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:24.695 00:14:24.695 real 0m13.198s 00:14:24.695 user 0m8.278s 00:14:24.695 sys 0m7.193s 00:14:24.695 20:15:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:24.695 20:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:24.695 ************************************ 00:14:24.695 END TEST nvmf_fused_ordering 00:14:24.695 ************************************ 00:14:24.695 20:15:02 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:24.695 20:15:02 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:24.695 20:15:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:24.695 20:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:24.955 ************************************ 00:14:24.955 START TEST nvmf_delete_subsystem 00:14:24.955 ************************************ 00:14:24.955 20:15:02 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:24.955 * Looking for test storage... 
00:14:24.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:24.955 20:15:02 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:24.955 20:15:02 -- nvmf/common.sh@7 -- # uname -s
00:14:24.955 20:15:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:24.955 20:15:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:24.955 20:15:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:24.955 20:15:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:24.955 20:15:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:24.955 20:15:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:24.955 20:15:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:24.955 20:15:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:24.955 20:15:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:24.955 20:15:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:24.955 20:15:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:14:24.955 20:15:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:14:24.955 20:15:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:24.955 20:15:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:24.955 20:15:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:24.955 20:15:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:24.955 20:15:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:24.955 20:15:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:24.955 20:15:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:24.955 20:15:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:24.955 [paths/export.sh@3-6 elided: the same PATH value is re-prefixed twice more with the /opt/go, /opt/protoc and /opt/golangci directories, then exported and echoed]
00:14:24.955 20:15:02 -- nvmf/common.sh@46 -- # : 0
00:14:24.955 20:15:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:14:24.955 20:15:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:14:24.955 20:15:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:14:24.955 20:15:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:24.955 20:15:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:24.955 20:15:02 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:14:24.955 20:15:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:14:24.955 20:15:02 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:14:24.955 20:15:02 -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:14:24.955 20:15:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:14:24.955 20:15:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:24.955 20:15:02 -- nvmf/common.sh@436 -- # prepare_net_devs
00:14:24.955 20:15:02 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:14:24.955 20:15:02 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:14:24.956 20:15:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:24.956 20:15:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:24.956 20:15:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:24.956 20:15:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:14:24.956 20:15:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:14:24.956 20:15:02 -- nvmf/common.sh@284 -- # xtrace_disable
00:14:24.956 20:15:02 -- common/autotest_common.sh@10 -- # set +x
00:14:31.530 20:15:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:14:31.530 20:15:08 -- nvmf/common.sh@290 -- # pci_devs=()
00:14:31.530 20:15:08 -- nvmf/common.sh@290 -- # local -a pci_devs
00:14:31.530 20:15:08 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:14:31.530 20:15:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:14:31.530 20:15:08 -- nvmf/common.sh@292 -- # pci_drivers=()
00:14:31.530 20:15:08 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:14:31.530 20:15:08 -- nvmf/common.sh@294 -- # net_devs=()
00:14:31.530 20:15:08 -- nvmf/common.sh@294 -- # local -ga net_devs
00:14:31.530 20:15:08 -- nvmf/common.sh@295 -- # e810=()
00:14:31.530 20:15:08 -- nvmf/common.sh@295 -- # local -ga e810
00:14:31.530 20:15:08 -- nvmf/common.sh@296 -- # x722=()
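A note for orientation: after the device and namespace setup that follows, delete_subsystem.sh exercises one behavior, deleting an NVMe-oF subsystem while I/O is outstanding. The rpc_cmd calls that appear further down in this log reduce to roughly the following minimal sketch; the rpc.py path and socket handling are assumptions (the script goes through its rpc_cmd wrapper), while the method names and parameters are taken from the xtrace:

    # Sketch only: reconstructed from the rpc_cmd xtrace below, not the verbatim script.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # assumed helper path
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512        # null backing bdev: 1000 MiB, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s latencies (values in us) keep I/O pending
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # spdk_nvme_perf (-q 128 -w randrw against the 10.0.0.2:4420 listener) is started,
    # then the subsystem is deleted out from under it:
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The Delay0 bdev is the point of the test: with every I/O held for about a second, the deletion is guaranteed to race against outstanding requests, which is what produces the aborted-I/O noise later in this log.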
00:14:31.531 20:15:08 -- nvmf/common.sh@296 -- # local -ga x722 00:14:31.531 20:15:08 -- nvmf/common.sh@297 -- # mlx=() 00:14:31.531 20:15:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:31.531 20:15:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.531 20:15:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:31.531 20:15:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:31.531 20:15:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:31.531 20:15:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:31.531 20:15:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:31.531 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:31.531 20:15:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:31.531 20:15:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:31.531 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:31.531 20:15:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:31.531 20:15:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:31.531 20:15:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.531 20:15:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:31.531 20:15:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.531 20:15:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:31.531 Found net devices under 0000:af:00.0: cvl_0_0 00:14:31.531 20:15:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:31.531 20:15:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:31.531 20:15:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.531 20:15:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:31.531 20:15:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.531 20:15:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:31.531 Found net devices under 0000:af:00.1: cvl_0_1 00:14:31.531 20:15:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.531 20:15:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:31.531 20:15:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:31.531 20:15:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:31.531 20:15:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.531 20:15:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.531 20:15:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.531 20:15:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:31.531 20:15:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.531 20:15:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.531 20:15:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:31.531 20:15:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.531 20:15:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.531 20:15:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:31.531 20:15:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:31.531 20:15:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.531 20:15:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.531 20:15:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.531 20:15:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.531 20:15:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:31.531 20:15:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.531 20:15:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.531 20:15:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.531 20:15:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:31.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:14:31.531 00:14:31.531 --- 10.0.0.2 ping statistics --- 00:14:31.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.531 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:14:31.531 20:15:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:14:31.531 00:14:31.531 --- 10.0.0.1 ping statistics --- 00:14:31.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.531 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:14:31.531 20:15:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.531 20:15:08 -- nvmf/common.sh@410 -- # return 0 00:14:31.531 20:15:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:31.531 20:15:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.531 20:15:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:31.531 20:15:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.531 20:15:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:31.531 20:15:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:31.531 20:15:08 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:31.531 20:15:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:31.531 20:15:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:31.531 20:15:08 -- common/autotest_common.sh@10 -- # set +x 00:14:31.531 20:15:08 -- nvmf/common.sh@469 -- # nvmfpid=1729198 00:14:31.531 20:15:08 -- nvmf/common.sh@470 -- # waitforlisten 1729198 00:14:31.531 20:15:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:31.531 20:15:08 -- common/autotest_common.sh@817 -- # '[' -z 1729198 ']' 00:14:31.531 20:15:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.531 20:15:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:31.531 20:15:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.531 20:15:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:31.531 20:15:08 -- common/autotest_common.sh@10 -- # set +x 00:14:31.531 [2024-02-14 20:15:08.398563] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:31.531 [2024-02-14 20:15:08.398603] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.531 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.531 [2024-02-14 20:15:08.462276] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:31.531 [2024-02-14 20:15:08.531570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:31.531 [2024-02-14 20:15:08.531688] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.531 [2024-02-14 20:15:08.531696] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.531 [2024-02-14 20:15:08.531701] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
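For reference, the namespace plumbing that those two pings just verified was built by nvmf_tcp_init a few entries above; condensed from that xtrace, with this run's interface names (cvl_0_0/cvl_0_1 are the renamed ice ports):

    # Target NIC is isolated in a network namespace; the initiator side stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Running nvmf_tgt inside cvl_0_0_ns_spdk (the ip netns exec prefix on the NVMF_APP invocation above) is what lets a single host act as both NVMe/TCP target and initiator over real e810 hardware.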
00:14:31.531 [2024-02-14 20:15:08.531789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.531 [2024-02-14 20:15:08.531793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.790 20:15:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:31.790 20:15:09 -- common/autotest_common.sh@850 -- # return 0 00:14:31.790 20:15:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:31.790 20:15:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:31.790 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.052 20:15:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.052 20:15:09 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.052 20:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.052 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.052 [2024-02-14 20:15:09.235808] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.052 20:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.052 20:15:09 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.052 20:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.052 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.052 20:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.052 20:15:09 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.052 20:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.052 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.052 [2024-02-14 20:15:09.251964] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.052 20:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.052 20:15:09 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:32.052 20:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.052 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.052 NULL1 00:14:32.052 20:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.052 20:15:09 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:32.052 20:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.052 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.052 Delay0 00:14:32.052 20:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.052 20:15:09 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.052 20:15:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.052 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.052 20:15:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.052 20:15:09 -- target/delete_subsystem.sh@28 -- # perf_pid=1729445 00:14:32.052 20:15:09 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:32.052 20:15:09 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:32.052 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.052 [2024-02-14 20:15:09.326561] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:14:33.958 20:15:11 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:33.958 20:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:34.217 20:15:11 -- common/autotest_common.sh@10 -- # set +x
00:14:34.217 [repetitive perf output elided: long runs of 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)', interleaved with 'starting I/O failed: -6', as in-flight I/O was aborted by the subsystem deletion]
00:14:34.217 [2024-02-14 20:15:11.415748] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f16a000bf80 is same with the state(5) to be set
00:14:34.218 [further Read/Write completion runs elided]
00:14:34.218 [2024-02-14 20:15:11.417457] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f16a000c4e0 is same with the state(5) to be set
00:14:34.218 [further Read/Write completion runs elided]
00:14:34.218 [2024-02-14 20:15:11.417733] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f16a0000c00 is same with the state(5) to be set
00:14:34.218 [further completion runs, ending in repeated 'starting I/O failed: -6' markers, elided]
00:14:35.154 [2024-02-14 20:15:12.382960] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111bab0 is same with the state(5) to be set
00:14:35.154 [completion runs elided]
00:14:35.154 [2024-02-14 20:15:12.419377] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112cfe0 is same with the state(5) to be set
00:14:35.155 [completion runs elided]
00:14:35.155 [2024-02-14 20:15:12.419490] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f16a000c230 is same with the state(5) to be set
00:14:35.155 [completion runs elided]
00:14:35.155 [2024-02-14 20:15:12.420150] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1124d10 is same with the state(5) to be set
00:14:35.155 [completion runs elided]
00:14:35.155 [2024-02-14 20:15:12.420285] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125270 is same with the state(5) to be set
00:14:35.155 [2024-02-14 20:15:12.420950] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x111bab0 (9): Bad file descriptor
00:14:35.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:35.155 20:15:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:35.155 20:15:12 -- target/delete_subsystem.sh@34 -- # delay=0
00:14:35.155 20:15:12 -- target/delete_subsystem.sh@35 -- # kill -0 1729445
00:14:35.155 20:15:12 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:14:35.155 Initializing NVMe Controllers
00:14:35.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:35.155 Controller IO queue size 128, less than required.
00:14:35.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:35.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:35.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:35.155 Initialization complete. Launching workers.
00:14:35.155 ========================================================
00:14:35.155 Latency(us)
00:14:35.155 Device Information : IOPS MiB/s Average min max
00:14:35.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.54 0.09 957190.77 367.06 1012694.44
00:14:35.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.35 0.07 914731.46 596.75 1011151.65
00:14:35.155 ========================================================
00:14:35.155 Total : 332.89 0.16 938523.86 367.06 1012694.44
00:14:35.155
00:14:35.723 20:15:12 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:14:35.723 20:15:12 -- target/delete_subsystem.sh@35 -- # kill -0 1729445
00:14:35.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1729445) - No such process
00:14:35.723 20:15:12 -- target/delete_subsystem.sh@45 -- # NOT wait 1729445
00:14:35.723 20:15:12 -- common/autotest_common.sh@638 -- # local es=0
00:14:35.723 20:15:12 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1729445
00:14:35.723 20:15:12 -- common/autotest_common.sh@626 -- # local arg=wait
00:14:35.723 20:15:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:35.723 20:15:12 -- common/autotest_common.sh@630 -- # type -t wait
00:14:35.723 20:15:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:35.723 20:15:12 -- common/autotest_common.sh@641 -- # wait 1729445
00:14:35.723 20:15:12 -- common/autotest_common.sh@641 -- # es=1
00:14:35.723 20:15:12 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:14:35.723 20:15:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:14:35.723 20:15:12 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:14:35.723 20:15:12 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:35.723 20:15:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:35.723 20:15:12 --
common/autotest_common.sh@10 -- # set +x 00:14:35.723 20:15:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:35.723 20:15:12 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.723 20:15:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:35.723 20:15:12 -- common/autotest_common.sh@10 -- # set +x 00:14:35.723 [2024-02-14 20:15:12.948213] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.723 20:15:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:35.723 20:15:12 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.723 20:15:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:35.723 20:15:12 -- common/autotest_common.sh@10 -- # set +x 00:14:35.723 20:15:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:35.723 20:15:12 -- target/delete_subsystem.sh@54 -- # perf_pid=1729946 00:14:35.723 20:15:12 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:35.723 20:15:12 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:35.723 20:15:12 -- target/delete_subsystem.sh@57 -- # kill -0 1729946 00:14:35.723 20:15:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:35.723 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.723 [2024-02-14 20:15:13.007936] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:36.291 20:15:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:36.291 20:15:13 -- target/delete_subsystem.sh@57 -- # kill -0 1729946 00:14:36.291 20:15:13 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:36.858 20:15:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:36.858 20:15:13 -- target/delete_subsystem.sh@57 -- # kill -0 1729946 00:14:36.858 20:15:13 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:37.118 20:15:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:37.118 20:15:14 -- target/delete_subsystem.sh@57 -- # kill -0 1729946 00:14:37.118 20:15:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:37.686 20:15:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:37.686 20:15:14 -- target/delete_subsystem.sh@57 -- # kill -0 1729946 00:14:37.686 20:15:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.254 20:15:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.254 20:15:15 -- target/delete_subsystem.sh@57 -- # kill -0 1729946 00:14:38.254 20:15:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.822 20:15:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.822 20:15:15 -- target/delete_subsystem.sh@57 -- # kill -0 1729946 00:14:38.822 20:15:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.822 Initializing NVMe Controllers 00:14:38.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:38.822 Controller IO queue size 128, less than required. 00:14:38.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:14:38.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:38.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:38.822 Initialization complete. Launching workers. 00:14:38.822 ======================================================== 00:14:38.822 Latency(us) 00:14:38.822 Device Information : IOPS MiB/s Average min max 00:14:38.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003668.72 1000315.06 1041998.97 00:14:38.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005188.13 1000506.03 1012249.71 00:14:38.822 ======================================================== 00:14:38.822 Total : 256.00 0.12 1004428.42 1000315.06 1041998.97 00:14:38.822 00:14:39.081 20:15:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:39.081 20:15:16 -- target/delete_subsystem.sh@57 -- # kill -0 1729946 00:14:39.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1729946) - No such process 00:14:39.081 20:15:16 -- target/delete_subsystem.sh@67 -- # wait 1729946 00:14:39.081 20:15:16 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:39.081 20:15:16 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:39.081 20:15:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:39.081 20:15:16 -- nvmf/common.sh@116 -- # sync 00:14:39.081 20:15:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:39.081 20:15:16 -- nvmf/common.sh@119 -- # set +e 00:14:39.341 20:15:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:39.341 20:15:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:39.341 rmmod nvme_tcp 00:14:39.341 rmmod nvme_fabrics 00:14:39.341 rmmod nvme_keyring 00:14:39.341 20:15:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:39.341 20:15:16 -- nvmf/common.sh@123 -- # set -e 00:14:39.341 20:15:16 -- nvmf/common.sh@124 -- # return 0 00:14:39.341 20:15:16 -- nvmf/common.sh@477 -- # '[' -n 1729198 ']' 00:14:39.341 20:15:16 -- nvmf/common.sh@478 -- # killprocess 1729198 00:14:39.341 20:15:16 -- common/autotest_common.sh@924 -- # '[' -z 1729198 ']' 00:14:39.341 20:15:16 -- common/autotest_common.sh@928 -- # kill -0 1729198 00:14:39.341 20:15:16 -- common/autotest_common.sh@929 -- # uname 00:14:39.341 20:15:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:39.341 20:15:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1729198 00:14:39.341 20:15:16 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:39.341 20:15:16 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:39.341 20:15:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1729198' 00:14:39.341 killing process with pid 1729198 00:14:39.341 20:15:16 -- common/autotest_common.sh@943 -- # kill 1729198 00:14:39.341 20:15:16 -- common/autotest_common.sh@948 -- # wait 1729198 00:14:39.601 20:15:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:39.601 20:15:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:39.601 20:15:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:39.601 20:15:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.601 20:15:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:39.601 20:15:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.601 20:15:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
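The repeated kill -0 / sleep 0.5 pairs in the two runs above are the script waiting out spdk_nvme_perf rather than blocking on wait. Condensed from the delete_subsystem.sh xtrace (lines 34-38 for the first run and 56-60 for the second), the loop is roughly the following paraphrase, not the verbatim script:

    # Paraphrase of the polling visible in the xtrace; the iteration caps differ per run
    # (delay > 30 for the 5 s perf run, delay > 20 for the 3 s run).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf_pid was 1729445, then 1729946 in this log
        (( delay++ > 20 )) && break             # cap the wait instead of hanging the test
        sleep 0.5
    done
    NOT wait "$perf_pid"                        # wait must now fail: the PID is already gone

The expected outcome is exactly what the log shows: kill -0 eventually reports "No such process" because perf exits (with errors in the first run, cleanly in the second), and the NOT wrapper asserts that wait returns nonzero.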
00:14:39.601 20:15:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.509 20:15:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:41.509 00:14:41.509 real 0m16.768s 00:14:41.509 user 0m30.381s 00:14:41.509 sys 0m5.459s 00:14:41.509 20:15:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.509 20:15:18 -- common/autotest_common.sh@10 -- # set +x 00:14:41.509 ************************************ 00:14:41.509 END TEST nvmf_delete_subsystem 00:14:41.509 ************************************ 00:14:41.509 20:15:18 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:14:41.509 20:15:18 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.509 20:15:18 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:41.509 20:15:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:41.509 20:15:18 -- common/autotest_common.sh@10 -- # set +x 00:14:41.770 ************************************ 00:14:41.770 START TEST nvmf_nvme_cli 00:14:41.770 ************************************ 00:14:41.770 20:15:18 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.770 * Looking for test storage... 00:14:41.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.770 20:15:19 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.770 20:15:19 -- nvmf/common.sh@7 -- # uname -s 00:14:41.770 20:15:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.770 20:15:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.770 20:15:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.770 20:15:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.770 20:15:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.770 20:15:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.770 20:15:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.770 20:15:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.770 20:15:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.770 20:15:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.770 20:15:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:41.770 20:15:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:41.770 20:15:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.770 20:15:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.770 20:15:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.770 20:15:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.770 20:15:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.770 20:15:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.770 20:15:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.770 20:15:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.770 20:15:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.770 20:15:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.770 20:15:19 -- paths/export.sh@5 -- # export PATH 00:14:41.770 20:15:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.770 20:15:19 -- nvmf/common.sh@46 -- # : 0 00:14:41.770 20:15:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:41.770 20:15:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:41.770 20:15:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:41.770 20:15:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.770 20:15:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.770 20:15:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:41.770 20:15:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:41.770 20:15:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:41.770 20:15:19 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.770 20:15:19 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.770 20:15:19 -- target/nvme_cli.sh@14 -- # devs=() 00:14:41.770 20:15:19 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:41.770 20:15:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:41.770 20:15:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.770 20:15:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:41.770 20:15:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:41.770 20:15:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 
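build_nvmf_app_args in the trace above appends the shared-memory id and tracepoint mask to the NVMF_APP array before the target is launched. A rough sketch of that assembly, assuming the workspace layout shown in this log; the -m coremask shown is the one this test passes later:

    # How nvmf/common.sh builds the target command line up as an array.
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + tracepoint group mask, as traced above
    # Once nvmf_tcp_init has created the namespace, the app runs inside it:
    ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0xF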
00:14:41.770 20:15:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.770 20:15:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.770 20:15:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.770 20:15:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:41.770 20:15:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:41.770 20:15:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:41.770 20:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:48.352 20:15:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:48.352 20:15:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:48.352 20:15:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:48.352 20:15:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:48.352 20:15:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:48.352 20:15:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:48.352 20:15:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:48.352 20:15:24 -- nvmf/common.sh@294 -- # net_devs=() 00:14:48.352 20:15:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:48.352 20:15:24 -- nvmf/common.sh@295 -- # e810=() 00:14:48.352 20:15:24 -- nvmf/common.sh@295 -- # local -ga e810 00:14:48.352 20:15:24 -- nvmf/common.sh@296 -- # x722=() 00:14:48.352 20:15:24 -- nvmf/common.sh@296 -- # local -ga x722 00:14:48.352 20:15:24 -- nvmf/common.sh@297 -- # mlx=() 00:14:48.352 20:15:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:48.352 20:15:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.352 20:15:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:48.352 20:15:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:48.352 20:15:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:48.352 20:15:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:48.352 20:15:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:48.352 20:15:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:48.353 20:15:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:48.353 20:15:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:48.353 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:48.353 20:15:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:48.353 20:15:24 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:48.353 20:15:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:48.353 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:48.353 20:15:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:48.353 20:15:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:48.353 20:15:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.353 20:15:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:48.353 20:15:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.353 20:15:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:48.353 Found net devices under 0000:af:00.0: cvl_0_0 00:14:48.353 20:15:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.353 20:15:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:48.353 20:15:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.353 20:15:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:48.353 20:15:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.353 20:15:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:48.353 Found net devices under 0000:af:00.1: cvl_0_1 00:14:48.353 20:15:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.353 20:15:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:48.353 20:15:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:48.353 20:15:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:48.353 20:15:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:48.353 20:15:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.353 20:15:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.353 20:15:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.353 20:15:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:48.353 20:15:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.353 20:15:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.353 20:15:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:48.353 20:15:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.353 20:15:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.353 20:15:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:48.353 20:15:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:48.353 20:15:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.353 20:15:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.353 20:15:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.353 20:15:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.353 20:15:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:48.353 20:15:24 -- 
nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.353 20:15:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.353 20:15:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.353 20:15:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:48.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:14:48.353 00:14:48.353 --- 10.0.0.2 ping statistics --- 00:14:48.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.353 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:14:48.353 20:15:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:48.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:14:48.353 00:14:48.353 --- 10.0.0.1 ping statistics --- 00:14:48.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.353 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:14:48.353 20:15:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.353 20:15:25 -- nvmf/common.sh@410 -- # return 0 00:14:48.353 20:15:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:48.353 20:15:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.353 20:15:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:48.353 20:15:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:48.353 20:15:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.353 20:15:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:48.353 20:15:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:48.353 20:15:25 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:48.353 20:15:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:48.353 20:15:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:48.353 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:14:48.353 20:15:25 -- nvmf/common.sh@469 -- # nvmfpid=1734406 00:14:48.353 20:15:25 -- nvmf/common.sh@470 -- # waitforlisten 1734406 00:14:48.353 20:15:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:48.353 20:15:25 -- common/autotest_common.sh@817 -- # '[' -z 1734406 ']' 00:14:48.353 20:15:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.353 20:15:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.353 20:15:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.353 20:15:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.353 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:14:48.353 [2024-02-14 20:15:25.136711] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
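The nvmf_tcp_init sequence traced above reduces to ordinary iproute2/iptables plumbing; the interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addresses are taken directly from the log:

    ip netns add cvl_0_0_ns_spdk                      # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                # sanity-check the path

The two ping blocks above (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) confirm both directions before modprobe nvme-tcp loads the initiator driver.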
00:14:48.353 [2024-02-14 20:15:25.136748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.353 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.353 [2024-02-14 20:15:25.200218] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.353 [2024-02-14 20:15:25.270924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:48.353 [2024-02-14 20:15:25.271047] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.353 [2024-02-14 20:15:25.271055] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.353 [2024-02-14 20:15:25.271061] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.353 [2024-02-14 20:15:25.271112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.353 [2024-02-14 20:15:25.271210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.353 [2024-02-14 20:15:25.271296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.353 [2024-02-14 20:15:25.271297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.612 20:15:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:48.612 20:15:25 -- common/autotest_common.sh@850 -- # return 0 00:14:48.612 20:15:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:48.612 20:15:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:48.612 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:14:48.612 20:15:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.612 20:15:25 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.612 20:15:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.612 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:14:48.612 [2024-02-14 20:15:25.976912] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.612 20:15:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.612 20:15:25 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:48.612 20:15:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.612 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:14:48.612 Malloc0 00:14:48.612 20:15:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.612 20:15:26 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:48.612 20:15:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.612 20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.612 Malloc1 00:14:48.612 20:15:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.612 20:15:26 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:48.612 20:15:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.612 20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 20:15:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.871 20:15:26 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.871 20:15:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.871 
20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 20:15:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.871 20:15:26 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.871 20:15:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.871 20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 20:15:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.871 20:15:26 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.871 20:15:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.871 20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 [2024-02-14 20:15:26.053400] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.871 20:15:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.871 20:15:26 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.871 20:15:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.871 20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 20:15:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.871 20:15:26 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:48.871 00:14:48.871 Discovery Log Number of Records 2, Generation counter 2 00:14:48.871 =====Discovery Log Entry 0====== 00:14:48.871 trtype: tcp 00:14:48.871 adrfam: ipv4 00:14:48.871 subtype: current discovery subsystem 00:14:48.871 treq: not required 00:14:48.871 portid: 0 00:14:48.871 trsvcid: 4420 00:14:48.871 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:48.871 traddr: 10.0.0.2 00:14:48.871 eflags: explicit discovery connections, duplicate discovery information 00:14:48.871 sectype: none 00:14:48.871 =====Discovery Log Entry 1====== 00:14:48.871 trtype: tcp 00:14:48.871 adrfam: ipv4 00:14:48.871 subtype: nvme subsystem 00:14:48.871 treq: not required 00:14:48.871 portid: 0 00:14:48.871 trsvcid: 4420 00:14:48.871 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:48.871 traddr: 10.0.0.2 00:14:48.871 eflags: none 00:14:48.871 sectype: none 00:14:48.871 20:15:26 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:48.871 20:15:26 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:48.871 20:15:26 -- nvmf/common.sh@510 -- # local dev _ 00:14:48.871 20:15:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:48.871 20:15:26 -- nvmf/common.sh@509 -- # nvme list 00:14:48.871 20:15:26 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:48.871 20:15:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:48.871 20:15:26 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:48.871 20:15:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:48.871 20:15:26 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:48.871 20:15:26 -- nvmf/common.sh@514 -- # echo /dev/nvme1n2 00:14:48.871 20:15:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:48.871 20:15:26 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:48.871 20:15:26 -- nvmf/common.sh@514 -- # echo /dev/nvme1n1 00:14:48.871 20:15:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:48.871 20:15:26 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:14:48.871 20:15:26 -- target/nvme_cli.sh@32 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:50.246 20:15:27 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:50.246 20:15:27 -- common/autotest_common.sh@1175 -- # local i=0 00:14:50.246 20:15:27 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.247 20:15:27 -- common/autotest_common.sh@1177 -- # [[ -n 2 ]] 00:14:50.247 20:15:27 -- common/autotest_common.sh@1178 -- # nvme_device_counter=2 00:14:50.247 20:15:27 -- common/autotest_common.sh@1182 -- # sleep 2 00:14:52.159 20:15:29 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:14:52.159 20:15:29 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:14:52.159 20:15:29 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.159 20:15:29 -- common/autotest_common.sh@1184 -- # nvme_devices=2 00:14:52.159 20:15:29 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.159 20:15:29 -- common/autotest_common.sh@1185 -- # return 0 00:14:52.159 20:15:29 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:52.159 20:15:29 -- nvmf/common.sh@510 -- # local dev _ 00:14:52.159 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.159 20:15:29 -- nvmf/common.sh@509 -- # nvme list 00:14:52.159 20:15:29 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:52.159 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.159 20:15:29 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:52.159 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.159 20:15:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:52.159 20:15:29 -- nvmf/common.sh@514 -- # echo /dev/nvme1n2 00:14:52.159 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.159 20:15:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:52.159 20:15:29 -- nvmf/common.sh@514 -- # echo /dev/nvme1n1 00:14:52.159 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.159 20:15:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:52.160 20:15:29 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:52.160 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.160 20:15:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:52.160 20:15:29 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:52.160 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.160 20:15:29 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme1n2 00:14:52.160 /dev/nvme1n1 00:14:52.160 /dev/nvme0n2 00:14:52.160 /dev/nvme0n1 ]] 00:14:52.160 20:15:29 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:52.160 20:15:29 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:52.160 20:15:29 -- nvmf/common.sh@510 -- # local dev _ 00:14:52.160 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.160 20:15:29 -- nvmf/common.sh@509 -- # nvme list 00:14:52.468 20:15:29 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:52.468 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.468 20:15:29 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:52.468 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.468 20:15:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:52.468 20:15:29 -- nvmf/common.sh@514 -- # echo /dev/nvme1n2 00:14:52.468 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.468 
20:15:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:52.468 20:15:29 -- nvmf/common.sh@514 -- # echo /dev/nvme1n1 00:14:52.468 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.468 20:15:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:52.468 20:15:29 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:52.468 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.468 20:15:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:52.468 20:15:29 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:52.468 20:15:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:52.468 20:15:29 -- target/nvme_cli.sh@59 -- # nvme_num=4 00:14:52.468 20:15:29 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.727 20:15:29 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:52.727 20:15:29 -- common/autotest_common.sh@1196 -- # local i=0 00:14:52.727 20:15:29 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:14:52.727 20:15:29 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.727 20:15:29 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:52.727 20:15:29 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.727 20:15:29 -- common/autotest_common.sh@1208 -- # return 0 00:14:52.727 20:15:29 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:52.727 20:15:29 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.727 20:15:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.727 20:15:29 -- common/autotest_common.sh@10 -- # set +x 00:14:52.727 20:15:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.727 20:15:30 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:52.727 20:15:30 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:52.727 20:15:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:52.727 20:15:30 -- nvmf/common.sh@116 -- # sync 00:14:52.727 20:15:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:52.727 20:15:30 -- nvmf/common.sh@119 -- # set +e 00:14:52.727 20:15:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:52.727 20:15:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:52.727 rmmod nvme_tcp 00:14:52.727 rmmod nvme_fabrics 00:14:52.727 rmmod nvme_keyring 00:14:52.727 20:15:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:52.727 20:15:30 -- nvmf/common.sh@123 -- # set -e 00:14:52.727 20:15:30 -- nvmf/common.sh@124 -- # return 0 00:14:52.727 20:15:30 -- nvmf/common.sh@477 -- # '[' -n 1734406 ']' 00:14:52.727 20:15:30 -- nvmf/common.sh@478 -- # killprocess 1734406 00:14:52.727 20:15:30 -- common/autotest_common.sh@924 -- # '[' -z 1734406 ']' 00:14:52.727 20:15:30 -- common/autotest_common.sh@928 -- # kill -0 1734406 00:14:52.727 20:15:30 -- common/autotest_common.sh@929 -- # uname 00:14:52.727 20:15:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:52.727 20:15:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1734406 00:14:52.727 20:15:30 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:52.727 20:15:30 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:52.727 20:15:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1734406' 00:14:52.727 killing process with pid 1734406 00:14:52.727 20:15:30 -- 
common/autotest_common.sh@943 -- # kill 1734406 00:14:52.727 20:15:30 -- common/autotest_common.sh@948 -- # wait 1734406 00:14:52.986 20:15:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:52.986 20:15:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:52.986 20:15:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:52.986 20:15:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.986 20:15:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:52.986 20:15:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.986 20:15:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.986 20:15:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.523 20:15:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:55.523 00:14:55.523 real 0m13.517s 00:14:55.523 user 0m22.008s 00:14:55.523 sys 0m5.100s 00:14:55.523 20:15:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:55.523 20:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:55.523 ************************************ 00:14:55.523 END TEST nvmf_nvme_cli 00:14:55.523 ************************************ 00:14:55.523 20:15:32 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:55.523 20:15:32 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:55.523 20:15:32 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:55.523 20:15:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:55.523 20:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:55.523 ************************************ 00:14:55.523 START TEST nvmf_host_management 00:14:55.523 ************************************ 00:14:55.523 20:15:32 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:55.523 * Looking for test storage... 
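For reference, the initiator-side flow of the nvmf_nvme_cli test that just finished, condensed to the commands visible in the trace (the test also passes --hostnqn/--hostid explicitly; those are omitted here for brevity):

    nvme discover -t tcp -a 10.0.0.2 -s 4420                 # dump the discovery log
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # count namespaces by serial
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # tear the association down

The serial-number grep is how waitforserial decided both Malloc namespaces had shown up before the test proceeded.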
00:14:55.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.523 20:15:32 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.523 20:15:32 -- nvmf/common.sh@7 -- # uname -s 00:14:55.523 20:15:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.523 20:15:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.523 20:15:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.523 20:15:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.523 20:15:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.523 20:15:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.523 20:15:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.523 20:15:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.523 20:15:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.523 20:15:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.523 20:15:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:55.523 20:15:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:55.523 20:15:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.523 20:15:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.523 20:15:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.523 20:15:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.523 20:15:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.523 20:15:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.523 20:15:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.523 20:15:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.523 20:15:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.523 20:15:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.523 20:15:32 -- paths/export.sh@5 -- # export PATH 00:14:55.523 20:15:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.523 20:15:32 -- nvmf/common.sh@46 -- # : 0 00:14:55.523 20:15:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:55.523 20:15:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:55.523 20:15:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:55.523 20:15:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.523 20:15:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.523 20:15:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:55.523 20:15:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:55.523 20:15:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:55.523 20:15:32 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.523 20:15:32 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.523 20:15:32 -- target/host_management.sh@104 -- # nvmftestinit 00:14:55.523 20:15:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:55.523 20:15:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.523 20:15:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:55.523 20:15:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:55.523 20:15:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:55.523 20:15:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.523 20:15:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.523 20:15:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.523 20:15:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:55.523 20:15:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:55.523 20:15:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:55.523 20:15:32 -- common/autotest_common.sh@10 -- # set +x 00:15:02.098 20:15:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:02.098 20:15:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:02.098 20:15:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:02.098 20:15:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:02.098 20:15:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:02.098 20:15:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:02.098 20:15:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:02.098 20:15:38 -- nvmf/common.sh@294 -- # net_devs=() 00:15:02.098 20:15:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:02.098 
20:15:38 -- nvmf/common.sh@295 -- # e810=() 00:15:02.098 20:15:38 -- nvmf/common.sh@295 -- # local -ga e810 00:15:02.098 20:15:38 -- nvmf/common.sh@296 -- # x722=() 00:15:02.098 20:15:38 -- nvmf/common.sh@296 -- # local -ga x722 00:15:02.098 20:15:38 -- nvmf/common.sh@297 -- # mlx=() 00:15:02.098 20:15:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:02.098 20:15:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.098 20:15:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:02.098 20:15:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:02.098 20:15:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:02.098 20:15:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.098 20:15:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:02.098 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:02.098 20:15:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.098 20:15:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:02.098 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:02.098 20:15:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:02.098 20:15:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.098 20:15:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.098 20:15:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.098 20:15:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.098 20:15:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:15:02.098 Found net devices under 0000:af:00.0: cvl_0_0 00:15:02.098 20:15:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.098 20:15:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.098 20:15:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.098 20:15:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.098 20:15:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.098 20:15:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:02.098 Found net devices under 0000:af:00.1: cvl_0_1 00:15:02.098 20:15:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.098 20:15:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:02.098 20:15:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:02.098 20:15:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:02.098 20:15:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.098 20:15:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.098 20:15:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.098 20:15:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:02.098 20:15:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.098 20:15:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.098 20:15:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:02.098 20:15:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.098 20:15:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.098 20:15:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:02.098 20:15:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:02.098 20:15:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.098 20:15:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.098 20:15:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.098 20:15:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.098 20:15:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:02.098 20:15:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.098 20:15:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.098 20:15:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.098 20:15:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:02.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:15:02.098 00:15:02.098 --- 10.0.0.2 ping statistics --- 00:15:02.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.098 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:02.098 20:15:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:15:02.098 00:15:02.098 --- 10.0.0.1 ping statistics --- 00:15:02.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.098 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:15:02.098 20:15:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.098 20:15:38 -- nvmf/common.sh@410 -- # return 0 00:15:02.098 20:15:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:02.098 20:15:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.098 20:15:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:02.098 20:15:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:02.099 20:15:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.099 20:15:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:02.099 20:15:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:02.099 20:15:38 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:15:02.099 20:15:38 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:15:02.099 20:15:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:02.099 20:15:38 -- common/autotest_common.sh@10 -- # set +x 00:15:02.099 ************************************ 00:15:02.099 START TEST nvmf_host_management 00:15:02.099 ************************************ 00:15:02.099 20:15:38 -- common/autotest_common.sh@1102 -- # nvmf_host_management 00:15:02.099 20:15:38 -- target/host_management.sh@69 -- # starttarget 00:15:02.099 20:15:38 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:02.099 20:15:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.099 20:15:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:02.099 20:15:38 -- common/autotest_common.sh@10 -- # set +x 00:15:02.099 20:15:38 -- nvmf/common.sh@469 -- # nvmfpid=1739172 00:15:02.099 20:15:38 -- nvmf/common.sh@470 -- # waitforlisten 1739172 00:15:02.099 20:15:38 -- common/autotest_common.sh@817 -- # '[' -z 1739172 ']' 00:15:02.099 20:15:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.099 20:15:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:02.099 20:15:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.099 20:15:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:02.099 20:15:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:02.099 20:15:38 -- common/autotest_common.sh@10 -- # set +x 00:15:02.099 [2024-02-14 20:15:38.974192] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
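nvmfappstart above launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern; the polling command is an illustrative stand-in (rpc_get_methods is a real SPDK RPC, but waitforlisten's actual implementation in autotest_common.sh differs):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Poll until the target's UNIX-domain RPC socket is serving requests.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done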
00:15:02.099 [2024-02-14 20:15:38.974234] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.099 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.099 [2024-02-14 20:15:39.037049] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.099 [2024-02-14 20:15:39.117854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:02.099 [2024-02-14 20:15:39.117962] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.099 [2024-02-14 20:15:39.117972] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.099 [2024-02-14 20:15:39.117983] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.099 [2024-02-14 20:15:39.118019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.099 [2024-02-14 20:15:39.118040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.099 [2024-02-14 20:15:39.118154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.099 [2024-02-14 20:15:39.118155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:02.357 20:15:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.357 20:15:39 -- common/autotest_common.sh@850 -- # return 0 00:15:02.357 20:15:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:02.357 20:15:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:02.357 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:15:02.616 20:15:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.616 20:15:39 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.616 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.616 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:15:02.616 [2024-02-14 20:15:39.803851] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.616 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.616 20:15:39 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:02.616 20:15:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:02.616 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:15:02.616 20:15:39 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:02.616 20:15:39 -- target/host_management.sh@23 -- # cat 00:15:02.616 20:15:39 -- target/host_management.sh@30 -- # rpc_cmd 00:15:02.616 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.616 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:15:02.616 Malloc0 00:15:02.616 [2024-02-14 20:15:39.863222] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.616 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.616 20:15:39 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:02.616 20:15:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:02.616 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:15:02.616 20:15:39 -- target/host_management.sh@73 -- # perfpid=1739437 00:15:02.616 20:15:39 -- target/host_management.sh@74 -- # 
waitforlisten 1739437 /var/tmp/bdevperf.sock 00:15:02.616 20:15:39 -- common/autotest_common.sh@817 -- # '[' -z 1739437 ']' 00:15:02.616 20:15:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.616 20:15:39 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:02.616 20:15:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:02.616 20:15:39 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:02.616 20:15:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.616 20:15:39 -- nvmf/common.sh@520 -- # config=() 00:15:02.616 20:15:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:02.616 20:15:39 -- nvmf/common.sh@520 -- # local subsystem config 00:15:02.616 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:15:02.616 20:15:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:02.616 20:15:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:02.616 { 00:15:02.616 "params": { 00:15:02.616 "name": "Nvme$subsystem", 00:15:02.616 "trtype": "$TEST_TRANSPORT", 00:15:02.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:02.616 "adrfam": "ipv4", 00:15:02.616 "trsvcid": "$NVMF_PORT", 00:15:02.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:02.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:02.616 "hdgst": ${hdgst:-false}, 00:15:02.616 "ddgst": ${ddgst:-false} 00:15:02.616 }, 00:15:02.616 "method": "bdev_nvme_attach_controller" 00:15:02.616 } 00:15:02.616 EOF 00:15:02.616 )") 00:15:02.616 20:15:39 -- nvmf/common.sh@542 -- # cat 00:15:02.616 20:15:39 -- nvmf/common.sh@544 -- # jq . 00:15:02.616 20:15:39 -- nvmf/common.sh@545 -- # IFS=, 00:15:02.616 20:15:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:02.616 "params": { 00:15:02.616 "name": "Nvme0", 00:15:02.616 "trtype": "tcp", 00:15:02.616 "traddr": "10.0.0.2", 00:15:02.616 "adrfam": "ipv4", 00:15:02.616 "trsvcid": "4420", 00:15:02.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:02.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:02.616 "hdgst": false, 00:15:02.616 "ddgst": false 00:15:02.616 }, 00:15:02.616 "method": "bdev_nvme_attach_controller" 00:15:02.616 }' 00:15:02.616 [2024-02-14 20:15:39.952633] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:15:02.616 [2024-02-14 20:15:39.952682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739437 ] 00:15:02.616 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.616 [2024-02-14 20:15:40.013307] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.875 [2024-02-14 20:15:40.094684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.875 [2024-02-14 20:15:40.094740] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:15:02.875 Running I/O for 10 seconds... 
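gen_nvmf_target_json above pipes a bdev_nvme_attach_controller config into bdevperf on /dev/fd/63. Reconstructed here for readability: the "params" object is verbatim from the printf output in the trace, while the outer {"subsystems": ...} wrapper is the usual SPDK JSON-config shape and is assumed rather than shown in the log:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

bdevperf then runs against this config with the flags traced above: -q 64 (queue depth), -o 65536 (I/O size in bytes), -w verify, -t 10 (seconds).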
00:15:03.446 20:15:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:03.446 20:15:40 -- common/autotest_common.sh@850 -- # return 0 00:15:03.446 20:15:40 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:03.446 20:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.446 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:15:03.446 20:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.446 20:15:40 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:03.446 20:15:40 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:03.446 20:15:40 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:03.446 20:15:40 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:03.446 20:15:40 -- target/host_management.sh@52 -- # local ret=1 00:15:03.446 20:15:40 -- target/host_management.sh@53 -- # local i 00:15:03.446 20:15:40 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:03.446 20:15:40 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:03.446 20:15:40 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:03.446 20:15:40 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:03.446 20:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.446 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:15:03.446 20:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.446 20:15:40 -- target/host_management.sh@55 -- # read_io_count=1174 00:15:03.446 20:15:40 -- target/host_management.sh@58 -- # '[' 1174 -ge 100 ']' 00:15:03.446 20:15:40 -- target/host_management.sh@59 -- # ret=0 00:15:03.446 20:15:40 -- target/host_management.sh@60 -- # break 00:15:03.446 20:15:40 -- target/host_management.sh@64 -- # return 0 00:15:03.446 20:15:40 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:03.446 20:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.446 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:15:03.446 [2024-02-14 20:15:40.830543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5290 is same with the state(5) to be set
[... the same recv-state *ERROR* line repeats roughly 30 more times, timestamps 20:15:40.830543-830781; identical duplicates condensed ...]
00:15:03.446 [2024-02-14 20:15:40.832532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.446 [2024-02-14 20:15:40.832567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion *NOTICE* pair repeats for each of the remaining ~63 outstanding READ/WRITE commands on qid:1 (lba 31744-43136), every one completing ABORTED - SQ DELETION; duplicates condensed ...]
00:15:03.448 [2024-02-14 20:15:40.833492] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a89f0 is same with the state(5) to be set 00:15:03.448 [2024-02-14 20:15:40.833543] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16a89f0 was disconnected and freed. reset controller.
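The waitforio helper traced at 20:15:40 above is a plain polling loop over bdevperf's RPC socket: it reads bdev_get_iostat until the bdev reports at least 100 completed reads (1174 here on the first try). A minimal sketch of the same idiom; the socket path, bdev name, and 100-op threshold come from the trace, while the retry delay is an assumption:

sock=/var/tmp/bdevperf.sock
for ((i = 10; i != 0; i--)); do
    # num_read_ops is taken from the first (only) bdev in the iostat reply
    read_io_count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && break
    sleep 0.25
done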
00:15:03.448 [2024-02-14 20:15:40.834440] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:03.448 20:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.448 20:15:40 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:03.448 task offset: 37760 on job bdev=Nvme0n1 fails 00:15:03.448 00:15:03.448 Latency(us) 00:15:03.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.448 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:03.448 Job: Nvme0n1 ended in about 0.55 seconds with error 00:15:03.448 Verification LBA range: start 0x0 length 0x400 00:15:03.448 Nvme0n1 : 0.55 2379.45 148.72 117.32 0.00 25345.71 1646.20 50181.85 00:15:03.448 =================================================================================================================== 00:15:03.448 Total : 2379.45 148.72 117.32 0.00 25345.71 1646.20 50181.85 00:15:03.448 [2024-02-14 20:15:40.835989] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:03.448 [2024-02-14 20:15:40.836004] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168e630 (9): Bad file descriptor 00:15:03.448 [2024-02-14 20:15:40.836030] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:15:03.448 20:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.448 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:15:03.448 20:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.448 20:15:40 -- target/host_management.sh@87 -- # sleep 1 00:15:03.448 [2024-02-14 20:15:40.850155] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
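The burst of ABORTED completions above is induced, not a bug: host_management.sh@84 revokes the host's access to the subsystem while bdevperf is mid-run, which tears down the qpair, and @85 re-admits the host so the automatic controller reset can reconnect. Reduced to the two RPC calls (a sketch; rpc.py path shortened, NQNs as used throughout this run):

rpc=scripts/rpc.py
# revoke the host: every outstanding I/O completes ABORTED - SQ DELETION
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# re-admit it so the reset in bdev_nvme can succeed
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0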
00:15:04.827 20:15:41 -- target/host_management.sh@91 -- # kill -9 1739437 00:15:04.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1739437) - No such process 00:15:04.827 20:15:41 -- target/host_management.sh@91 -- # true 00:15:04.827 20:15:41 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:04.827 20:15:41 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:04.827 20:15:41 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:04.827 20:15:41 -- nvmf/common.sh@520 -- # config=() 00:15:04.827 20:15:41 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.827 20:15:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.827 20:15:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.827 { 00:15:04.827 "params": { 00:15:04.827 "name": "Nvme$subsystem", 00:15:04.827 "trtype": "$TEST_TRANSPORT", 00:15:04.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.827 "adrfam": "ipv4", 00:15:04.827 "trsvcid": "$NVMF_PORT", 00:15:04.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.828 "hdgst": ${hdgst:-false}, 00:15:04.828 "ddgst": ${ddgst:-false} 00:15:04.828 }, 00:15:04.828 "method": "bdev_nvme_attach_controller" 00:15:04.828 } 00:15:04.828 EOF 00:15:04.828 )") 00:15:04.828 20:15:41 -- nvmf/common.sh@542 -- # cat 00:15:04.828 20:15:41 -- nvmf/common.sh@544 -- # jq . 00:15:04.828 20:15:41 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.828 20:15:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.828 "params": { 00:15:04.828 "name": "Nvme0", 00:15:04.828 "trtype": "tcp", 00:15:04.828 "traddr": "10.0.0.2", 00:15:04.828 "adrfam": "ipv4", 00:15:04.828 "trsvcid": "4420", 00:15:04.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:04.828 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:04.828 "hdgst": false, 00:15:04.828 "ddgst": false 00:15:04.828 }, 00:15:04.828 "method": "bdev_nvme_attach_controller" 00:15:04.828 }' 00:15:04.828 [2024-02-14 20:15:41.893323] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:15:04.828 [2024-02-14 20:15:41.893370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739707 ] 00:15:04.828 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.828 [2024-02-14 20:15:41.955054] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.828 [2024-02-14 20:15:42.022389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.828 [2024-02-14 20:15:42.022445] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:15:05.087 Running I/O for 1 seconds... 
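The gen_nvmf_target_json trace repeated above (nvmf/common.sh@520-546) shows the idiom behind the config: each subsystem's JSON fragment is captured from a heredoc into a bash array, then jq, IFS=',' and printf cooperate to splice the fragments into the final document handed to bdevperf via process substitution. A simplified sketch of the same pattern, with the fragment trimmed for brevity (the exact splicing order in common.sh may differ):

config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
IFS=,    # expansion with [*] joins array elements on the first IFS character
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .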
00:15:06.026 00:15:06.026 Latency(us) 00:15:06.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.026 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:06.026 Verification LBA range: start 0x0 length 0x400 00:15:06.026 Nvme0n1 : 1.01 2957.06 184.82 0.00 0.00 21389.69 1201.49 48683.89 00:15:06.026 =================================================================================================================== 00:15:06.026 Total : 2957.06 184.82 0.00 0.00 21389.69 1201.49 48683.89 00:15:06.026 [2024-02-14 20:15:43.346680] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:15:06.287 20:15:43 -- target/host_management.sh@101 -- # stoptarget 00:15:06.287 20:15:43 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:06.287 20:15:43 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:06.287 20:15:43 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:06.287 20:15:43 -- target/host_management.sh@40 -- # nvmftestfini 00:15:06.287 20:15:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:06.287 20:15:43 -- nvmf/common.sh@116 -- # sync 00:15:06.287 20:15:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:06.287 20:15:43 -- nvmf/common.sh@119 -- # set +e 00:15:06.287 20:15:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:06.287 20:15:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:06.287 rmmod nvme_tcp 00:15:06.287 rmmod nvme_fabrics 00:15:06.287 rmmod nvme_keyring 00:15:06.287 20:15:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:06.287 20:15:43 -- nvmf/common.sh@123 -- # set -e 00:15:06.287 20:15:43 -- nvmf/common.sh@124 -- # return 0 00:15:06.287 20:15:43 -- nvmf/common.sh@477 -- # '[' -n 1739172 ']' 00:15:06.287 20:15:43 -- nvmf/common.sh@478 -- # killprocess 1739172 00:15:06.287 20:15:43 -- common/autotest_common.sh@924 -- # '[' -z 1739172 ']' 00:15:06.287 20:15:43 -- common/autotest_common.sh@928 -- # kill -0 1739172 00:15:06.287 20:15:43 -- common/autotest_common.sh@929 -- # uname 00:15:06.287 20:15:43 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:06.287 20:15:43 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1739172 00:15:06.287 20:15:43 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:15:06.287 20:15:43 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:15:06.287 20:15:43 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1739172' 00:15:06.287 killing process with pid 1739172 00:15:06.287 20:15:43 -- common/autotest_common.sh@943 -- # kill 1739172 00:15:06.287 20:15:43 -- common/autotest_common.sh@948 -- # wait 1739172 00:15:06.547 [2024-02-14 20:15:43.877463] app.c: 603:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:06.547 20:15:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:06.547 20:15:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:06.547 20:15:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:06.547 20:15:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.547 20:15:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:06.547 20:15:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.547 
20:15:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.547 20:15:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.086 20:15:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:09.086 00:15:09.086 real 0m7.038s 00:15:09.086 user 0m21.467s 00:15:09.086 sys 0m1.235s 00:15:09.086 20:15:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:09.086 20:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:09.086 ************************************ 00:15:09.086 END TEST nvmf_host_management 00:15:09.086 ************************************ 00:15:09.086 20:15:45 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:15:09.086 00:15:09.086 real 0m13.512s 00:15:09.086 user 0m23.213s 00:15:09.086 sys 0m5.995s 00:15:09.086 20:15:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:09.086 20:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:09.086 ************************************ 00:15:09.086 END TEST nvmf_host_management 00:15:09.086 ************************************ 00:15:09.086 20:15:46 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:09.086 20:15:46 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:09.086 20:15:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:09.086 20:15:46 -- common/autotest_common.sh@10 -- # set +x 00:15:09.086 ************************************ 00:15:09.086 START TEST nvmf_lvol 00:15:09.086 ************************************ 00:15:09.086 20:15:46 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:09.086 * Looking for test storage... 
00:15:09.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.086 20:15:46 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.086 20:15:46 -- nvmf/common.sh@7 -- # uname -s 00:15:09.086 20:15:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.086 20:15:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.086 20:15:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.086 20:15:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.086 20:15:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.086 20:15:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.086 20:15:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.086 20:15:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.086 20:15:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.086 20:15:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.086 20:15:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:09.086 20:15:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:09.086 20:15:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.086 20:15:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.086 20:15:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.087 20:15:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.087 20:15:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.087 20:15:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.087 20:15:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.087 20:15:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.087 20:15:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.087 20:15:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.087 20:15:46 -- paths/export.sh@5 -- # export PATH 00:15:09.087 20:15:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.087 20:15:46 -- nvmf/common.sh@46 -- # : 0 00:15:09.087 20:15:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:09.087 20:15:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:09.087 20:15:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:09.087 20:15:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.087 20:15:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.087 20:15:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:09.087 20:15:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:09.087 20:15:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:09.087 20:15:46 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.087 20:15:46 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.087 20:15:46 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:09.087 20:15:46 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:09.087 20:15:46 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.087 20:15:46 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:09.087 20:15:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:09.087 20:15:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.087 20:15:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:09.087 20:15:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:09.087 20:15:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:09.087 20:15:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.087 20:15:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.087 20:15:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.087 20:15:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:09.087 20:15:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:09.087 20:15:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:09.087 20:15:46 -- common/autotest_common.sh@10 -- # set +x 00:15:15.662 20:15:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:15.662 20:15:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:15.662 20:15:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:15.662 20:15:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:15.662 20:15:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:15.662 20:15:52 
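One detail worth noting from the common.sh prologue above (nvmf/common.sh@17-19): the host identity is generated by nvme-cli at run time rather than hard-coded. A sketch of that derivation; the UUID seen in this log (801347e8-...) will differ on every machine, and the prefix-strip is one way to reproduce the NVME_HOSTID value shown:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN#nqn.2014-08.org.nvmexpress:uuid:}   # keep just the UUID part
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")  # passed to later nvme connect calls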
-- nvmf/common.sh@292 -- # pci_drivers=() 00:15:15.662 20:15:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:15.662 20:15:52 -- nvmf/common.sh@294 -- # net_devs=() 00:15:15.662 20:15:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:15.662 20:15:52 -- nvmf/common.sh@295 -- # e810=() 00:15:15.662 20:15:52 -- nvmf/common.sh@295 -- # local -ga e810 00:15:15.662 20:15:52 -- nvmf/common.sh@296 -- # x722=() 00:15:15.662 20:15:52 -- nvmf/common.sh@296 -- # local -ga x722 00:15:15.662 20:15:52 -- nvmf/common.sh@297 -- # mlx=() 00:15:15.662 20:15:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:15.662 20:15:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.662 20:15:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:15.662 20:15:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:15.662 20:15:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:15.662 20:15:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:15.662 20:15:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:15.662 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:15.662 20:15:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:15.662 20:15:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:15.662 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:15.662 20:15:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:15.662 20:15:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:15.662 20:15:52 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.662 20:15:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:15.662 20:15:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.662 20:15:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:15.662 Found net devices under 0000:af:00.0: cvl_0_0 00:15:15.662 20:15:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.662 20:15:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:15.662 20:15:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.662 20:15:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:15.662 20:15:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.662 20:15:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:15.662 Found net devices under 0000:af:00.1: cvl_0_1 00:15:15.662 20:15:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.662 20:15:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:15.662 20:15:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:15.662 20:15:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:15.662 20:15:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:15.662 20:15:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.662 20:15:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.662 20:15:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.662 20:15:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:15.662 20:15:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.662 20:15:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.662 20:15:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:15.663 20:15:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.663 20:15:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.663 20:15:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:15.663 20:15:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:15.663 20:15:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.663 20:15:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.663 20:15:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.663 20:15:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.663 20:15:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:15.663 20:15:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.663 20:15:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.663 20:15:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.663 20:15:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:15.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:15:15.663 00:15:15.663 --- 10.0.0.2 ping statistics --- 00:15:15.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.663 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:15.663 20:15:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:15.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:15:15.663 00:15:15.663 --- 10.0.0.1 ping statistics --- 00:15:15.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.663 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:15:15.663 20:15:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.663 20:15:52 -- nvmf/common.sh@410 -- # return 0 00:15:15.663 20:15:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:15.663 20:15:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.663 20:15:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:15.663 20:15:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:15.663 20:15:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.663 20:15:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:15.663 20:15:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:15.663 20:15:52 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:15.663 20:15:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:15.663 20:15:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:15.663 20:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:15.663 20:15:52 -- nvmf/common.sh@469 -- # nvmfpid=1743957 00:15:15.663 20:15:52 -- nvmf/common.sh@470 -- # waitforlisten 1743957 00:15:15.663 20:15:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:15.663 20:15:52 -- common/autotest_common.sh@817 -- # '[' -z 1743957 ']' 00:15:15.663 20:15:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.663 20:15:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:15.663 20:15:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.663 20:15:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:15.663 20:15:52 -- common/autotest_common.sh@10 -- # set +x 00:15:15.663 [2024-02-14 20:15:52.442687] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:15:15.663 [2024-02-14 20:15:52.442727] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.663 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.663 [2024-02-14 20:15:52.509773] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:15.663 [2024-02-14 20:15:52.583397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:15.663 [2024-02-14 20:15:52.583509] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.663 [2024-02-14 20:15:52.583517] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.663 [2024-02-14 20:15:52.583524] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
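The nvmf_tcp_init trace above (nvmf/common.sh@228-267) is the standard phy-mode topology for these runs: of the two E810 ports discovered earlier, cvl_0_0 becomes the target and is isolated in a network namespace, while cvl_0_1 stays in the root namespace as the initiator. Condensed to the commands that matter (all taken from the trace; run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target reachability check

nvmf_tgt itself is then launched inside the namespace via ip netns exec cvl_0_0_ns_spdk, which is why the subsystem listener below binds 10.0.0.2.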
00:15:15.663 [2024-02-14 20:15:52.583569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.663 [2024-02-14 20:15:52.583593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.663 [2024-02-14 20:15:52.583594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.922 20:15:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:15.922 20:15:53 -- common/autotest_common.sh@850 -- # return 0 00:15:15.922 20:15:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:15.922 20:15:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:15.922 20:15:53 -- common/autotest_common.sh@10 -- # set +x 00:15:15.922 20:15:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.922 20:15:53 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:16.181 [2024-02-14 20:15:53.424254] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.181 20:15:53 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:16.441 20:15:53 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:16.441 20:15:53 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:16.441 20:15:53 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:16.441 20:15:53 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:16.703 20:15:54 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:16.963 20:15:54 -- target/nvmf_lvol.sh@29 -- # lvs=d8093fbf-5069-4d58-90c0-d099fb27a8b4 00:15:16.963 20:15:54 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d8093fbf-5069-4d58-90c0-d099fb27a8b4 lvol 20 00:15:16.963 20:15:54 -- target/nvmf_lvol.sh@32 -- # lvol=c1dca2b2-fd56-4cbb-afe9-738f5fa01587 00:15:16.963 20:15:54 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:17.223 20:15:54 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c1dca2b2-fd56-4cbb-afe9-738f5fa01587 00:15:17.482 20:15:54 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:17.482 [2024-02-14 20:15:54.842645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.482 20:15:54 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:17.742 20:15:55 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:17.742 20:15:55 -- target/nvmf_lvol.sh@42 -- # perf_pid=1744449 00:15:17.742 20:15:55 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:17.742 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.680 
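Before the snapshot/clone steps below, the prologue traced above stacks the device under test as malloc -> RAID-0 -> lvstore -> lvol and exports it over TCP. The RPC sequence reduces to this sketch (rpc.py path shortened; each UUID is captured from the command's stdout, as the trace does; the lvol size 20 is in MiB per LVOL_BDEV_INIT_SIZE):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                    # -> Malloc0 (64 MiB, 512 B blocks)
$rpc bdev_malloc_create 64 512                    # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # lvstore UUID, d8093fbf-... in this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # lvol bdev UUID, c1dca2b2-... in this run
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420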
20:15:56 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c1dca2b2-fd56-4cbb-afe9-738f5fa01587 MY_SNAPSHOT 00:15:18.939 20:15:56 -- target/nvmf_lvol.sh@47 -- # snapshot=ed4fb995-c50d-41c8-95e1-a9c29fba2288 00:15:18.939 20:15:56 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c1dca2b2-fd56-4cbb-afe9-738f5fa01587 30 00:15:19.199 20:15:56 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ed4fb995-c50d-41c8-95e1-a9c29fba2288 MY_CLONE 00:15:19.458 20:15:56 -- target/nvmf_lvol.sh@49 -- # clone=525665e3-43ba-4422-8e41-8daa3a6eb0ef 00:15:19.458 20:15:56 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 525665e3-43ba-4422-8e41-8daa3a6eb0ef 00:15:19.718 20:15:57 -- target/nvmf_lvol.sh@53 -- # wait 1744449 00:15:29.707 Initializing NVMe Controllers 00:15:29.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:29.707 Controller IO queue size 128, less than required. 00:15:29.707 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:29.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:29.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:29.707 Initialization complete. Launching workers. 00:15:29.707 ======================================================== 00:15:29.707 Latency(us) 00:15:29.707 Device Information : IOPS MiB/s Average min max 00:15:29.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12292.10 48.02 10417.76 600.62 74595.65 00:15:29.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12218.70 47.73 10478.37 2988.52 59394.47 00:15:29.707 ======================================================== 00:15:29.707 Total : 24510.80 95.75 10447.97 600.62 74595.65 00:15:29.707 00:15:29.707 20:16:05 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:29.707 20:16:05 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c1dca2b2-fd56-4cbb-afe9-738f5fa01587 00:15:29.707 20:16:05 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d8093fbf-5069-4d58-90c0-d099fb27a8b4 00:15:29.707 20:16:06 -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:29.707 20:16:06 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:29.707 20:16:06 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:29.707 20:16:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:29.708 20:16:06 -- nvmf/common.sh@116 -- # sync 00:15:29.708 20:16:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:29.708 20:16:06 -- nvmf/common.sh@119 -- # set +e 00:15:29.708 20:16:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:29.708 20:16:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:29.708 rmmod nvme_tcp 00:15:29.708 rmmod nvme_fabrics 00:15:29.708 rmmod nvme_keyring 00:15:29.708 20:16:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:29.708 20:16:06 -- nvmf/common.sh@123 -- # set -e 00:15:29.708 20:16:06 -- nvmf/common.sh@124 -- # return 0 00:15:29.708 20:16:06 -- nvmf/common.sh@477 -- # '[' -n 1743957 ']' 
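Before the teardown completes, it is worth collecting the whole nvmf_lvol flow in one place. A condensed sketch of the RPC sequence the trace above executes; every command appears in the trace, but the variable plumbing is illustrative (each create call prints the UUID the next step consumes):

rpc=./scripts/rpc.py
# two 64 MiB malloc bdevs striped into a RAID0, which backs the lvstore
$rpc bdev_malloc_create 64 512                       # -> Malloc0
$rpc bdev_malloc_create 64 512                       # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
# export the lvol over NVMe/TCP
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# drive random writes while snapshotting, resizing, cloning and inflating
./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait    # let the 10 s perf run finish before tearing down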
00:15:29.708 20:16:06 -- nvmf/common.sh@478 -- # killprocess 1743957 00:15:29.708 20:16:06 -- common/autotest_common.sh@924 -- # '[' -z 1743957 ']' 00:15:29.708 20:16:06 -- common/autotest_common.sh@928 -- # kill -0 1743957 00:15:29.708 20:16:06 -- common/autotest_common.sh@929 -- # uname 00:15:29.708 20:16:06 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:29.708 20:16:06 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1743957 00:15:29.708 20:16:06 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:29.708 20:16:06 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:29.708 20:16:06 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1743957' 00:15:29.708 killing process with pid 1743957 00:15:29.708 20:16:06 -- common/autotest_common.sh@943 -- # kill 1743957 00:15:29.708 20:16:06 -- common/autotest_common.sh@948 -- # wait 1743957 00:15:29.708 20:16:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:29.708 20:16:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:29.708 20:16:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:29.708 20:16:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.708 20:16:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:29.708 20:16:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.708 20:16:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.708 20:16:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.089 20:16:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:31.089 00:15:31.089 real 0m22.420s 00:15:31.089 user 1m3.880s 00:15:31.089 sys 0m7.520s 00:15:31.089 20:16:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:31.089 20:16:08 -- common/autotest_common.sh@10 -- # set +x 00:15:31.089 ************************************ 00:15:31.089 END TEST nvmf_lvol 00:15:31.089 ************************************ 00:15:31.089 20:16:08 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:31.089 20:16:08 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:31.089 20:16:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:31.089 20:16:08 -- common/autotest_common.sh@10 -- # set +x 00:15:31.089 ************************************ 00:15:31.089 START TEST nvmf_lvs_grow 00:15:31.089 ************************************ 00:15:31.089 20:16:08 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:31.349 * Looking for test storage... 
00:15:31.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.349 20:16:08 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.349 20:16:08 -- nvmf/common.sh@7 -- # uname -s 00:15:31.349 20:16:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.349 20:16:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.349 20:16:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.349 20:16:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.349 20:16:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.349 20:16:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.349 20:16:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.349 20:16:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.349 20:16:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.349 20:16:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.349 20:16:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:31.349 20:16:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:31.349 20:16:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.349 20:16:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.349 20:16:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.349 20:16:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.349 20:16:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.349 20:16:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.349 20:16:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.349 20:16:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.349 20:16:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.349 20:16:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.349 20:16:08 -- paths/export.sh@5 -- # export PATH 00:15:31.349 20:16:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.349 20:16:08 -- nvmf/common.sh@46 -- # : 0 00:15:31.349 20:16:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:31.349 20:16:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:31.349 20:16:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:31.349 20:16:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.349 20:16:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.349 20:16:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:31.349 20:16:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:31.349 20:16:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:31.349 20:16:08 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.349 20:16:08 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:31.349 20:16:08 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:15:31.349 20:16:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:31.349 20:16:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.349 20:16:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:31.349 20:16:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:31.349 20:16:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:31.349 20:16:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.349 20:16:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.349 20:16:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.349 20:16:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:31.349 20:16:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:31.349 20:16:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:31.349 20:16:08 -- common/autotest_common.sh@10 -- # set +x 00:15:37.925 20:16:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:37.925 20:16:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:37.925 20:16:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:37.925 20:16:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:37.925 20:16:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:37.925 20:16:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:37.925 20:16:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:37.925 20:16:14 -- nvmf/common.sh@294 -- # net_devs=() 00:15:37.925 20:16:14 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:15:37.925 20:16:14 -- nvmf/common.sh@295 -- # e810=() 00:15:37.925 20:16:14 -- nvmf/common.sh@295 -- # local -ga e810 00:15:37.925 20:16:14 -- nvmf/common.sh@296 -- # x722=() 00:15:37.925 20:16:14 -- nvmf/common.sh@296 -- # local -ga x722 00:15:37.925 20:16:14 -- nvmf/common.sh@297 -- # mlx=() 00:15:37.925 20:16:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:37.925 20:16:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.925 20:16:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:37.925 20:16:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:37.925 20:16:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:37.925 20:16:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:37.925 20:16:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:37.925 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:37.925 20:16:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:37.925 20:16:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:37.925 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:37.925 20:16:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:37.925 20:16:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:37.925 20:16:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:37.925 20:16:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.925 20:16:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:37.925 20:16:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.925 20:16:14 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:37.925 Found net devices under 0000:af:00.0: cvl_0_0 00:15:37.925 20:16:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.925 20:16:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:37.925 20:16:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.926 20:16:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:37.926 20:16:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.926 20:16:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:37.926 Found net devices under 0000:af:00.1: cvl_0_1 00:15:37.926 20:16:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.926 20:16:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:37.926 20:16:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:37.926 20:16:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:37.926 20:16:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:37.926 20:16:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:37.926 20:16:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.926 20:16:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.926 20:16:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.926 20:16:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:37.926 20:16:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.926 20:16:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.926 20:16:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:37.926 20:16:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.926 20:16:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.926 20:16:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:37.926 20:16:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:37.926 20:16:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.926 20:16:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.926 20:16:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.926 20:16:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.926 20:16:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:37.926 20:16:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.926 20:16:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.926 20:16:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.926 20:16:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:37.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:15:37.926 00:15:37.926 --- 10.0.0.2 ping statistics --- 00:15:37.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.926 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:15:37.926 20:16:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:15:37.926 00:15:37.926 --- 10.0.0.1 ping statistics --- 00:15:37.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.926 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:15:37.926 20:16:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.926 20:16:14 -- nvmf/common.sh@410 -- # return 0 00:15:37.926 20:16:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:37.926 20:16:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.926 20:16:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:37.926 20:16:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:37.926 20:16:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.926 20:16:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:37.926 20:16:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:37.926 20:16:14 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:15:37.926 20:16:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:37.926 20:16:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:37.926 20:16:14 -- common/autotest_common.sh@10 -- # set +x 00:15:37.926 20:16:14 -- nvmf/common.sh@469 -- # nvmfpid=1750096 00:15:37.926 20:16:14 -- nvmf/common.sh@470 -- # waitforlisten 1750096 00:15:37.926 20:16:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:37.926 20:16:14 -- common/autotest_common.sh@817 -- # '[' -z 1750096 ']' 00:15:37.926 20:16:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.926 20:16:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:37.926 20:16:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.926 20:16:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:37.926 20:16:14 -- common/autotest_common.sh@10 -- # set +x 00:15:37.926 [2024-02-14 20:16:14.898632] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:15:37.926 [2024-02-14 20:16:14.898679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.926 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.926 [2024-02-14 20:16:14.962253] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.926 [2024-02-14 20:16:15.039392] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:37.926 [2024-02-14 20:16:15.039494] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.926 [2024-02-14 20:16:15.039502] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.926 [2024-02-14 20:16:15.039509] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
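The nvmf_tcp_init wiring traced above is short enough to restate as plain shell. Every command below appears in the trace; this is the same setup with the namespace split called out:

# move one port of the e810 pair into a private namespace for the target
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1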
00:15:37.926 [2024-02-14 20:16:15.039525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.495 20:16:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:38.495 20:16:15 -- common/autotest_common.sh@850 -- # return 0 00:15:38.495 20:16:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:38.495 20:16:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:38.495 20:16:15 -- common/autotest_common.sh@10 -- # set +x 00:15:38.495 20:16:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:38.495 [2024-02-14 20:16:15.871270] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:15:38.495 20:16:15 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:15:38.495 20:16:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:38.495 20:16:15 -- common/autotest_common.sh@10 -- # set +x 00:15:38.495 ************************************ 00:15:38.495 START TEST lvs_grow_clean 00:15:38.495 ************************************ 00:15:38.495 20:16:15 -- common/autotest_common.sh@1102 -- # lvs_grow 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:38.495 20:16:15 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:38.754 20:16:16 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:38.754 20:16:16 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:39.014 20:16:16 -- target/nvmf_lvs_grow.sh@28 -- # lvs=bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:39.014 20:16:16 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:39.014 20:16:16 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:39.014 20:16:16 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:39.015 20:16:16 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:39.015 20:16:16 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb83d3b6-074d-456e-b3fb-24675d26ace3 lvol 150 00:15:39.274 20:16:16 -- target/nvmf_lvs_grow.sh@33 -- # lvol=093b46d3-1fca-44f2-aa9f-9246c74940ce 00:15:39.274 20:16:16 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.274 20:16:16 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:39.534 [2024-02-14 20:16:16.745225] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:39.534 [2024-02-14 20:16:16.745280] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:39.534 true 00:15:39.534 20:16:16 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:39.534 20:16:16 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:39.534 20:16:16 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:39.534 20:16:16 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:39.794 20:16:17 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 093b46d3-1fca-44f2-aa9f-9246c74940ce 00:15:40.054 20:16:17 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:40.054 [2024-02-14 20:16:17.403207] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.054 20:16:17 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:40.313 20:16:17 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:40.313 20:16:17 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1750596 00:15:40.313 20:16:17 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:40.314 20:16:17 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1750596 /var/tmp/bdevperf.sock 00:15:40.314 20:16:17 -- common/autotest_common.sh@817 -- # '[' -z 1750596 ']' 00:15:40.314 20:16:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:40.314 20:16:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:40.314 20:16:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:40.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:40.314 20:16:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:40.314 20:16:17 -- common/autotest_common.sh@10 -- # set +x 00:15:40.314 [2024-02-14 20:16:17.589773] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:15:40.314 [2024-02-14 20:16:17.589818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750596 ] 00:15:40.314 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.314 [2024-02-14 20:16:17.647509] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.314 [2024-02-14 20:16:17.722122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.250 20:16:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:41.250 20:16:18 -- common/autotest_common.sh@850 -- # return 0 00:15:41.250 20:16:18 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:41.509 Nvme0n1 00:15:41.509 20:16:18 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:41.768 [ 00:15:41.768 { 00:15:41.768 "name": "Nvme0n1", 00:15:41.768 "aliases": [ 00:15:41.768 "093b46d3-1fca-44f2-aa9f-9246c74940ce" 00:15:41.768 ], 00:15:41.768 "product_name": "NVMe disk", 00:15:41.768 "block_size": 4096, 00:15:41.768 "num_blocks": 38912, 00:15:41.768 "uuid": "093b46d3-1fca-44f2-aa9f-9246c74940ce", 00:15:41.768 "assigned_rate_limits": { 00:15:41.768 "rw_ios_per_sec": 0, 00:15:41.768 "rw_mbytes_per_sec": 0, 00:15:41.768 "r_mbytes_per_sec": 0, 00:15:41.768 "w_mbytes_per_sec": 0 00:15:41.768 }, 00:15:41.768 "claimed": false, 00:15:41.768 "zoned": false, 00:15:41.768 "supported_io_types": { 00:15:41.768 "read": true, 00:15:41.768 "write": true, 00:15:41.768 "unmap": true, 00:15:41.768 "write_zeroes": true, 00:15:41.768 "flush": true, 00:15:41.768 "reset": true, 00:15:41.768 "compare": true, 00:15:41.768 "compare_and_write": true, 00:15:41.768 "abort": true, 00:15:41.768 "nvme_admin": true, 00:15:41.768 "nvme_io": true 00:15:41.768 }, 00:15:41.768 "driver_specific": { 00:15:41.768 "nvme": [ 00:15:41.768 { 00:15:41.768 "trid": { 00:15:41.768 "trtype": "TCP", 00:15:41.768 "adrfam": "IPv4", 00:15:41.768 "traddr": "10.0.0.2", 00:15:41.768 "trsvcid": "4420", 00:15:41.768 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:41.768 }, 00:15:41.768 "ctrlr_data": { 00:15:41.768 "cntlid": 1, 00:15:41.768 "vendor_id": "0x8086", 00:15:41.768 "model_number": "SPDK bdev Controller", 00:15:41.768 "serial_number": "SPDK0", 00:15:41.768 "firmware_revision": "24.05", 00:15:41.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:41.768 "oacs": { 00:15:41.768 "security": 0, 00:15:41.768 "format": 0, 00:15:41.768 "firmware": 0, 00:15:41.768 "ns_manage": 0 00:15:41.768 }, 00:15:41.768 "multi_ctrlr": true, 00:15:41.768 "ana_reporting": false 00:15:41.768 }, 00:15:41.768 "vs": { 00:15:41.768 "nvme_version": "1.3" 00:15:41.768 }, 00:15:41.768 "ns_data": { 00:15:41.768 "id": 1, 00:15:41.768 "can_share": true 00:15:41.768 } 00:15:41.768 } 00:15:41.768 ], 00:15:41.768 "mp_policy": "active_passive" 00:15:41.768 } 00:15:41.768 } 00:15:41.768 ] 00:15:41.768 20:16:18 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1750831 00:15:41.768 20:16:18 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:41.768 20:16:18 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:41.768 Running I/O 
for 10 seconds... 00:15:42.764 Latency(us) 00:15:42.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.764 Nvme0n1 : 1.00 22761.00 88.91 0.00 0.00 0.00 0.00 0.00 00:15:42.764 =================================================================================================================== 00:15:42.764 Total : 22761.00 88.91 0.00 0.00 0.00 0.00 0.00 00:15:42.764 00:15:43.701 20:16:20 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:43.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.701 Nvme0n1 : 2.00 22856.50 89.28 0.00 0.00 0.00 0.00 0.00 00:15:43.701 =================================================================================================================== 00:15:43.701 Total : 22856.50 89.28 0.00 0.00 0.00 0.00 0.00 00:15:43.701 00:15:43.960 true 00:15:43.960 20:16:21 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:43.960 20:16:21 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:43.960 20:16:21 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:43.960 20:16:21 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:43.960 20:16:21 -- target/nvmf_lvs_grow.sh@65 -- # wait 1750831 00:15:44.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.898 Nvme0n1 : 3.00 23091.67 90.20 0.00 0.00 0.00 0.00 0.00 00:15:44.898 =================================================================================================================== 00:15:44.898 Total : 23091.67 90.20 0.00 0.00 0.00 0.00 0.00 00:15:44.898 00:15:45.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.836 Nvme0n1 : 4.00 23295.75 91.00 0.00 0.00 0.00 0.00 0.00 00:15:45.836 =================================================================================================================== 00:15:45.836 Total : 23295.75 91.00 0.00 0.00 0.00 0.00 0.00 00:15:45.836 00:15:46.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:46.774 Nvme0n1 : 5.00 23359.80 91.25 0.00 0.00 0.00 0.00 0.00 00:15:46.774 =================================================================================================================== 00:15:46.774 Total : 23359.80 91.25 0.00 0.00 0.00 0.00 0.00 00:15:46.774 00:15:47.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:47.714 Nvme0n1 : 6.00 23413.17 91.46 0.00 0.00 0.00 0.00 0.00 00:15:47.714 =================================================================================================================== 00:15:47.714 Total : 23413.17 91.46 0.00 0.00 0.00 0.00 0.00 00:15:47.714 00:15:48.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.653 Nvme0n1 : 7.00 23455.86 91.62 0.00 0.00 0.00 0.00 0.00 00:15:48.653 =================================================================================================================== 00:15:48.653 Total : 23455.86 91.62 0.00 0.00 0.00 0.00 0.00 00:15:48.653 00:15:50.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:50.034 Nvme0n1 : 8.00 23481.88 91.73 0.00 0.00 0.00 0.00 0.00 00:15:50.034 
=================================================================================================================== 00:15:50.034 Total : 23481.88 91.73 0.00 0.00 0.00 0.00 0.00 00:15:50.034 00:15:50.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:50.974 Nvme0n1 : 9.00 23503.89 91.81 0.00 0.00 0.00 0.00 0.00 00:15:50.974 =================================================================================================================== 00:15:50.974 Total : 23503.89 91.81 0.00 0.00 0.00 0.00 0.00 00:15:50.974 00:15:51.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.915 Nvme0n1 : 10.00 23546.10 91.98 0.00 0.00 0.00 0.00 0.00 00:15:51.915 =================================================================================================================== 00:15:51.915 Total : 23546.10 91.98 0.00 0.00 0.00 0.00 0.00 00:15:51.915 00:15:51.915 00:15:51.915 Latency(us) 00:15:51.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.915 Nvme0n1 : 10.00 23550.98 92.00 0.00 0.00 5432.02 3245.59 20971.52 00:15:51.915 =================================================================================================================== 00:15:51.915 Total : 23550.98 92.00 0.00 0.00 5432.02 3245.59 20971.52 00:15:51.915 0 00:15:51.915 20:16:29 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1750596 00:15:51.915 20:16:29 -- common/autotest_common.sh@924 -- # '[' -z 1750596 ']' 00:15:51.915 20:16:29 -- common/autotest_common.sh@928 -- # kill -0 1750596 00:15:51.915 20:16:29 -- common/autotest_common.sh@929 -- # uname 00:15:51.915 20:16:29 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:51.915 20:16:29 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1750596 00:15:51.915 20:16:29 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:15:51.915 20:16:29 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:15:51.915 20:16:29 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1750596' 00:15:51.915 killing process with pid 1750596 00:15:51.915 20:16:29 -- common/autotest_common.sh@943 -- # kill 1750596 00:15:51.915 Received shutdown signal, test time was about 10.000000 seconds 00:15:51.915 00:15:51.915 Latency(us) 00:15:51.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.915 =================================================================================================================== 00:15:51.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.915 20:16:29 -- common/autotest_common.sh@948 -- # wait 1750596 00:15:52.175 20:16:29 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:52.175 20:16:29 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:52.175 20:16:29 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:52.435 20:16:29 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:52.435 20:16:29 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:52.435 20:16:29 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:52.435 [2024-02-14 20:16:29.829900] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:52.695 20:16:29 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:52.695 20:16:29 -- common/autotest_common.sh@638 -- # local es=0 00:15:52.695 20:16:29 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:52.695 20:16:29 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.695 20:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:52.695 20:16:29 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.695 20:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:52.695 20:16:29 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.695 20:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:52.695 20:16:29 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.695 20:16:29 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:52.695 20:16:29 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:52.695 request: 00:15:52.695 { 00:15:52.695 "uuid": "bb83d3b6-074d-456e-b3fb-24675d26ace3", 00:15:52.695 "method": "bdev_lvol_get_lvstores", 00:15:52.695 "req_id": 1 00:15:52.695 } 00:15:52.695 Got JSON-RPC error response 00:15:52.695 response: 00:15:52.695 { 00:15:52.695 "code": -19, 00:15:52.695 "message": "No such device" 00:15:52.695 } 00:15:52.695 20:16:30 -- common/autotest_common.sh@641 -- # es=1 00:15:52.695 20:16:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:52.695 20:16:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:52.695 20:16:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:52.695 20:16:30 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:52.955 aio_bdev 00:15:52.955 20:16:30 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 093b46d3-1fca-44f2-aa9f-9246c74940ce 00:15:52.955 20:16:30 -- common/autotest_common.sh@885 -- # local bdev_name=093b46d3-1fca-44f2-aa9f-9246c74940ce 00:15:52.955 20:16:30 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:52.955 20:16:30 -- common/autotest_common.sh@887 -- # local i 00:15:52.955 20:16:30 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:52.955 20:16:30 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:52.955 20:16:30 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:53.215 20:16:30 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 093b46d3-1fca-44f2-aa9f-9246c74940ce -t 2000 00:15:53.215 [ 00:15:53.215 { 00:15:53.215 "name": "093b46d3-1fca-44f2-aa9f-9246c74940ce", 00:15:53.215 "aliases": [ 00:15:53.215 "lvs/lvol" 
00:15:53.215 ], 00:15:53.215 "product_name": "Logical Volume", 00:15:53.215 "block_size": 4096, 00:15:53.215 "num_blocks": 38912, 00:15:53.215 "uuid": "093b46d3-1fca-44f2-aa9f-9246c74940ce", 00:15:53.215 "assigned_rate_limits": { 00:15:53.215 "rw_ios_per_sec": 0, 00:15:53.215 "rw_mbytes_per_sec": 0, 00:15:53.215 "r_mbytes_per_sec": 0, 00:15:53.215 "w_mbytes_per_sec": 0 00:15:53.215 }, 00:15:53.215 "claimed": false, 00:15:53.215 "zoned": false, 00:15:53.215 "supported_io_types": { 00:15:53.215 "read": true, 00:15:53.215 "write": true, 00:15:53.215 "unmap": true, 00:15:53.215 "write_zeroes": true, 00:15:53.215 "flush": false, 00:15:53.215 "reset": true, 00:15:53.215 "compare": false, 00:15:53.215 "compare_and_write": false, 00:15:53.215 "abort": false, 00:15:53.215 "nvme_admin": false, 00:15:53.215 "nvme_io": false 00:15:53.215 }, 00:15:53.215 "driver_specific": { 00:15:53.215 "lvol": { 00:15:53.215 "lvol_store_uuid": "bb83d3b6-074d-456e-b3fb-24675d26ace3", 00:15:53.215 "base_bdev": "aio_bdev", 00:15:53.215 "thin_provision": false, 00:15:53.215 "snapshot": false, 00:15:53.215 "clone": false, 00:15:53.215 "esnap_clone": false 00:15:53.215 } 00:15:53.215 } 00:15:53.215 } 00:15:53.215 ] 00:15:53.215 20:16:30 -- common/autotest_common.sh@893 -- # return 0 00:15:53.215 20:16:30 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:53.215 20:16:30 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:53.475 20:16:30 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:53.475 20:16:30 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:53.475 20:16:30 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:53.735 20:16:30 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:53.735 20:16:30 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 093b46d3-1fca-44f2-aa9f-9246c74940ce 00:15:53.735 20:16:31 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb83d3b6-074d-456e-b3fb-24675d26ace3 00:15:53.995 20:16:31 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:54.255 00:15:54.255 real 0m15.538s 00:15:54.255 user 0m15.179s 00:15:54.255 sys 0m1.450s 00:15:54.255 20:16:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.255 20:16:31 -- common/autotest_common.sh@10 -- # set +x 00:15:54.255 ************************************ 00:15:54.255 END TEST lvs_grow_clean 00:15:54.255 ************************************ 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:54.255 20:16:31 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:54.255 20:16:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:54.255 20:16:31 -- common/autotest_common.sh@10 -- # set +x 00:15:54.255 ************************************ 00:15:54.255 START TEST lvs_grow_dirty 00:15:54.255 ************************************ 00:15:54.255 20:16:31 -- common/autotest_common.sh@1102 -- # lvs_grow dirty 00:15:54.255 
20:16:31 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:54.255 20:16:31 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:54.515 20:16:31 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:15:54.515 20:16:31 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:15:54.515 20:16:31 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:54.774 20:16:32 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:54.774 20:16:32 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:54.774 20:16:32 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 lvol 150 00:15:54.774 20:16:32 -- target/nvmf_lvs_grow.sh@33 -- # lvol=57e623fb-04cf-4e12-b484-82899a44897b 00:15:54.774 20:16:32 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:54.774 20:16:32 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:55.033 [2024-02-14 20:16:32.335310] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:55.033 [2024-02-14 20:16:32.335361] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:55.033 true 00:15:55.033 20:16:32 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:15:55.033 20:16:32 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:55.293 20:16:32 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:55.293 20:16:32 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:55.293 20:16:32 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
57e623fb-04cf-4e12-b484-82899a44897b 00:15:55.553 20:16:32 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:55.812 20:16:32 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:55.813 20:16:33 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1753267 00:15:55.813 20:16:33 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.813 20:16:33 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:55.813 20:16:33 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1753267 /var/tmp/bdevperf.sock 00:15:55.813 20:16:33 -- common/autotest_common.sh@817 -- # '[' -z 1753267 ']' 00:15:55.813 20:16:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.813 20:16:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:55.813 20:16:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:55.813 20:16:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:55.813 20:16:33 -- common/autotest_common.sh@10 -- # set +x 00:15:55.813 [2024-02-14 20:16:33.197366] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:15:55.813 [2024-02-14 20:16:33.197416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753267 ] 00:15:55.813 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.072 [2024-02-14 20:16:33.257146] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.072 [2024-02-14 20:16:33.331109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.642 20:16:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:56.642 20:16:33 -- common/autotest_common.sh@850 -- # return 0 00:15:56.642 20:16:33 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:56.901 Nvme0n1 00:15:56.901 20:16:34 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:57.161 [ 00:15:57.161 { 00:15:57.161 "name": "Nvme0n1", 00:15:57.161 "aliases": [ 00:15:57.161 "57e623fb-04cf-4e12-b484-82899a44897b" 00:15:57.161 ], 00:15:57.161 "product_name": "NVMe disk", 00:15:57.161 "block_size": 4096, 00:15:57.161 "num_blocks": 38912, 00:15:57.161 "uuid": "57e623fb-04cf-4e12-b484-82899a44897b", 00:15:57.161 "assigned_rate_limits": { 00:15:57.161 "rw_ios_per_sec": 0, 00:15:57.161 "rw_mbytes_per_sec": 0, 00:15:57.161 "r_mbytes_per_sec": 0, 00:15:57.161 "w_mbytes_per_sec": 0 00:15:57.161 }, 00:15:57.161 "claimed": false, 00:15:57.161 "zoned": false, 00:15:57.161 "supported_io_types": { 00:15:57.161 "read": true, 00:15:57.161 "write": true, 
00:15:57.161 "unmap": true, 00:15:57.161 "write_zeroes": true, 00:15:57.161 "flush": true, 00:15:57.161 "reset": true, 00:15:57.161 "compare": true, 00:15:57.161 "compare_and_write": true, 00:15:57.161 "abort": true, 00:15:57.161 "nvme_admin": true, 00:15:57.161 "nvme_io": true 00:15:57.161 }, 00:15:57.161 "driver_specific": { 00:15:57.161 "nvme": [ 00:15:57.161 { 00:15:57.161 "trid": { 00:15:57.161 "trtype": "TCP", 00:15:57.161 "adrfam": "IPv4", 00:15:57.161 "traddr": "10.0.0.2", 00:15:57.161 "trsvcid": "4420", 00:15:57.161 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:57.161 }, 00:15:57.161 "ctrlr_data": { 00:15:57.161 "cntlid": 1, 00:15:57.161 "vendor_id": "0x8086", 00:15:57.161 "model_number": "SPDK bdev Controller", 00:15:57.161 "serial_number": "SPDK0", 00:15:57.161 "firmware_revision": "24.05", 00:15:57.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:57.161 "oacs": { 00:15:57.161 "security": 0, 00:15:57.161 "format": 0, 00:15:57.161 "firmware": 0, 00:15:57.161 "ns_manage": 0 00:15:57.161 }, 00:15:57.161 "multi_ctrlr": true, 00:15:57.161 "ana_reporting": false 00:15:57.161 }, 00:15:57.161 "vs": { 00:15:57.161 "nvme_version": "1.3" 00:15:57.161 }, 00:15:57.161 "ns_data": { 00:15:57.161 "id": 1, 00:15:57.161 "can_share": true 00:15:57.161 } 00:15:57.161 } 00:15:57.161 ], 00:15:57.161 "mp_policy": "active_passive" 00:15:57.161 } 00:15:57.161 } 00:15:57.161 ] 00:15:57.161 20:16:34 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1753444 00:15:57.161 20:16:34 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:57.161 20:16:34 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:57.161 Running I/O for 10 seconds... 00:15:58.145 Latency(us) 00:15:58.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.145 Nvme0n1 : 1.00 22959.00 89.68 0.00 0.00 0.00 0.00 0.00 00:15:58.145 =================================================================================================================== 00:15:58.145 Total : 22959.00 89.68 0.00 0.00 0.00 0.00 0.00 00:15:58.145 00:15:59.084 20:16:36 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:15:59.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.344 Nvme0n1 : 2.00 23358.00 91.24 0.00 0.00 0.00 0.00 0.00 00:15:59.344 =================================================================================================================== 00:15:59.344 Total : 23358.00 91.24 0.00 0.00 0.00 0.00 0.00 00:15:59.344 00:15:59.344 true 00:15:59.344 20:16:36 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:15:59.344 20:16:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:59.604 20:16:36 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:59.604 20:16:36 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:59.604 20:16:36 -- target/nvmf_lvs_grow.sh@65 -- # wait 1753444 00:16:00.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.173 Nvme0n1 : 3.00 23390.67 91.37 0.00 0.00 0.00 0.00 0.00 00:16:00.173 
=================================================================================================================== 00:16:00.173 Total : 23390.67 91.37 0.00 0.00 0.00 0.00 0.00 00:16:00.173 00:16:01.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.553 Nvme0n1 : 4.00 23447.00 91.59 0.00 0.00 0.00 0.00 0.00 00:16:01.553 =================================================================================================================== 00:16:01.553 Total : 23447.00 91.59 0.00 0.00 0.00 0.00 0.00 00:16:01.553 00:16:02.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:02.123 Nvme0n1 : 5.00 23485.60 91.74 0.00 0.00 0.00 0.00 0.00 00:16:02.123 =================================================================================================================== 00:16:02.123 Total : 23485.60 91.74 0.00 0.00 0.00 0.00 0.00 00:16:02.123 00:16:03.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:03.503 Nvme0n1 : 6.00 23446.00 91.59 0.00 0.00 0.00 0.00 0.00 00:16:03.503 =================================================================================================================== 00:16:03.503 Total : 23446.00 91.59 0.00 0.00 0.00 0.00 0.00 00:16:03.503 00:16:04.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.441 Nvme0n1 : 7.00 23470.29 91.68 0.00 0.00 0.00 0.00 0.00 00:16:04.441 =================================================================================================================== 00:16:04.441 Total : 23470.29 91.68 0.00 0.00 0.00 0.00 0.00 00:16:04.441 00:16:05.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.380 Nvme0n1 : 8.00 23545.12 91.97 0.00 0.00 0.00 0.00 0.00 00:16:05.380 =================================================================================================================== 00:16:05.380 Total : 23545.12 91.97 0.00 0.00 0.00 0.00 0.00 00:16:05.380 00:16:06.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.319 Nvme0n1 : 9.00 23655.89 92.41 0.00 0.00 0.00 0.00 0.00 00:16:06.319 =================================================================================================================== 00:16:06.319 Total : 23655.89 92.41 0.00 0.00 0.00 0.00 0.00 00:16:06.319 00:16:07.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.260 Nvme0n1 : 10.00 23747.90 92.77 0.00 0.00 0.00 0.00 0.00 00:16:07.260 =================================================================================================================== 00:16:07.260 Total : 23747.90 92.77 0.00 0.00 0.00 0.00 0.00 00:16:07.260 00:16:07.260 00:16:07.260 Latency(us) 00:16:07.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.260 Nvme0n1 : 10.00 23751.27 92.78 0.00 0.00 5386.12 1825.65 20597.03 00:16:07.260 =================================================================================================================== 00:16:07.261 Total : 23751.27 92.78 0.00 0.00 5386.12 1825.65 20597.03 00:16:07.261 0 00:16:07.261 20:16:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1753267 00:16:07.261 20:16:44 -- common/autotest_common.sh@924 -- # '[' -z 1753267 ']' 00:16:07.261 20:16:44 -- common/autotest_common.sh@928 -- # kill -0 1753267 00:16:07.261 20:16:44 -- common/autotest_common.sh@929 -- # uname 00:16:07.261 20:16:44 -- 
common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:07.261 20:16:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1753267 00:16:07.261 20:16:44 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:16:07.261 20:16:44 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:16:07.261 20:16:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1753267' 00:16:07.261 killing process with pid 1753267 00:16:07.261 20:16:44 -- common/autotest_common.sh@943 -- # kill 1753267 00:16:07.261 Received shutdown signal, test time was about 10.000000 seconds 00:16:07.261 00:16:07.261 Latency(us) 00:16:07.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.261 =================================================================================================================== 00:16:07.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:07.261 20:16:44 -- common/autotest_common.sh@948 -- # wait 1753267 00:16:07.520 20:16:44 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:07.781 20:16:44 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:16:07.781 20:16:44 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:07.781 20:16:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:07.781 20:16:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:07.781 20:16:45 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1750096 00:16:07.781 20:16:45 -- target/nvmf_lvs_grow.sh@74 -- # wait 1750096 00:16:08.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1750096 Killed "${NVMF_APP[@]}" "$@" 00:16:08.041 20:16:45 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:08.041 20:16:45 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:08.041 20:16:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:08.041 20:16:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:08.041 20:16:45 -- common/autotest_common.sh@10 -- # set +x 00:16:08.041 20:16:45 -- nvmf/common.sh@469 -- # nvmfpid=1755277 00:16:08.041 20:16:45 -- nvmf/common.sh@470 -- # waitforlisten 1755277 00:16:08.041 20:16:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:08.041 20:16:45 -- common/autotest_common.sh@817 -- # '[' -z 1755277 ']' 00:16:08.041 20:16:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.041 20:16:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:08.041 20:16:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.041 20:16:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:08.041 20:16:45 -- common/autotest_common.sh@10 -- # set +x 00:16:08.041 [2024-02-14 20:16:45.255094] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:16:08.041 [2024-02-14 20:16:45.255139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.041 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.041 [2024-02-14 20:16:45.317857] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.041 [2024-02-14 20:16:45.392977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:08.041 [2024-02-14 20:16:45.393079] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.041 [2024-02-14 20:16:45.393087] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.041 [2024-02-14 20:16:45.393093] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.041 [2024-02-14 20:16:45.393108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.981 20:16:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:08.981 20:16:46 -- common/autotest_common.sh@850 -- # return 0 00:16:08.981 20:16:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:08.981 20:16:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:08.981 20:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:08.981 20:16:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.981 20:16:46 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:08.981 [2024-02-14 20:16:46.233348] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:08.981 [2024-02-14 20:16:46.233437] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:08.981 [2024-02-14 20:16:46.233460] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:08.981 20:16:46 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:08.981 20:16:46 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 57e623fb-04cf-4e12-b484-82899a44897b 00:16:08.981 20:16:46 -- common/autotest_common.sh@885 -- # local bdev_name=57e623fb-04cf-4e12-b484-82899a44897b 00:16:08.981 20:16:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:08.981 20:16:46 -- common/autotest_common.sh@887 -- # local i 00:16:08.981 20:16:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:08.981 20:16:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:08.981 20:16:46 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:09.241 20:16:46 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 57e623fb-04cf-4e12-b484-82899a44897b -t 2000 00:16:09.241 [ 00:16:09.241 { 00:16:09.241 "name": "57e623fb-04cf-4e12-b484-82899a44897b", 00:16:09.241 "aliases": [ 00:16:09.241 "lvs/lvol" 00:16:09.241 ], 00:16:09.241 "product_name": "Logical Volume", 00:16:09.241 "block_size": 4096, 00:16:09.241 "num_blocks": 38912, 00:16:09.241 "uuid": "57e623fb-04cf-4e12-b484-82899a44897b", 00:16:09.241 "assigned_rate_limits": { 00:16:09.241 "rw_ios_per_sec": 0, 00:16:09.241 "rw_mbytes_per_sec": 0, 00:16:09.241 "r_mbytes_per_sec": 0, 00:16:09.241 
"w_mbytes_per_sec": 0 00:16:09.241 }, 00:16:09.241 "claimed": false, 00:16:09.241 "zoned": false, 00:16:09.241 "supported_io_types": { 00:16:09.241 "read": true, 00:16:09.241 "write": true, 00:16:09.241 "unmap": true, 00:16:09.241 "write_zeroes": true, 00:16:09.241 "flush": false, 00:16:09.241 "reset": true, 00:16:09.241 "compare": false, 00:16:09.241 "compare_and_write": false, 00:16:09.241 "abort": false, 00:16:09.241 "nvme_admin": false, 00:16:09.241 "nvme_io": false 00:16:09.241 }, 00:16:09.241 "driver_specific": { 00:16:09.241 "lvol": { 00:16:09.241 "lvol_store_uuid": "e4264c00-0c62-401f-ab21-77d4ea5dd9b9", 00:16:09.241 "base_bdev": "aio_bdev", 00:16:09.241 "thin_provision": false, 00:16:09.241 "snapshot": false, 00:16:09.241 "clone": false, 00:16:09.241 "esnap_clone": false 00:16:09.241 } 00:16:09.241 } 00:16:09.241 } 00:16:09.241 ] 00:16:09.241 20:16:46 -- common/autotest_common.sh@893 -- # return 0 00:16:09.241 20:16:46 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:16:09.241 20:16:46 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:09.501 20:16:46 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:09.501 20:16:46 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:16:09.501 20:16:46 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:09.501 20:16:46 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:09.501 20:16:46 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:09.761 [2024-02-14 20:16:47.041978] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:09.761 20:16:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:16:09.761 20:16:47 -- common/autotest_common.sh@638 -- # local es=0 00:16:09.761 20:16:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:16:09.761 20:16:47 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.761 20:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:09.761 20:16:47 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.761 20:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:09.761 20:16:47 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.761 20:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:09.761 20:16:47 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.761 20:16:47 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:09.761 20:16:47 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:16:10.020 request: 00:16:10.020 { 00:16:10.020 
"uuid": "e4264c00-0c62-401f-ab21-77d4ea5dd9b9", 00:16:10.020 "method": "bdev_lvol_get_lvstores", 00:16:10.020 "req_id": 1 00:16:10.020 } 00:16:10.020 Got JSON-RPC error response 00:16:10.020 response: 00:16:10.020 { 00:16:10.020 "code": -19, 00:16:10.020 "message": "No such device" 00:16:10.020 } 00:16:10.020 20:16:47 -- common/autotest_common.sh@641 -- # es=1 00:16:10.020 20:16:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:10.020 20:16:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:10.020 20:16:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:10.020 20:16:47 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:10.020 aio_bdev 00:16:10.020 20:16:47 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 57e623fb-04cf-4e12-b484-82899a44897b 00:16:10.020 20:16:47 -- common/autotest_common.sh@885 -- # local bdev_name=57e623fb-04cf-4e12-b484-82899a44897b 00:16:10.020 20:16:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:10.020 20:16:47 -- common/autotest_common.sh@887 -- # local i 00:16:10.020 20:16:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:10.020 20:16:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:10.020 20:16:47 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:10.280 20:16:47 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 57e623fb-04cf-4e12-b484-82899a44897b -t 2000 00:16:10.539 [ 00:16:10.539 { 00:16:10.539 "name": "57e623fb-04cf-4e12-b484-82899a44897b", 00:16:10.539 "aliases": [ 00:16:10.539 "lvs/lvol" 00:16:10.539 ], 00:16:10.539 "product_name": "Logical Volume", 00:16:10.539 "block_size": 4096, 00:16:10.539 "num_blocks": 38912, 00:16:10.539 "uuid": "57e623fb-04cf-4e12-b484-82899a44897b", 00:16:10.539 "assigned_rate_limits": { 00:16:10.539 "rw_ios_per_sec": 0, 00:16:10.539 "rw_mbytes_per_sec": 0, 00:16:10.539 "r_mbytes_per_sec": 0, 00:16:10.539 "w_mbytes_per_sec": 0 00:16:10.539 }, 00:16:10.539 "claimed": false, 00:16:10.539 "zoned": false, 00:16:10.539 "supported_io_types": { 00:16:10.539 "read": true, 00:16:10.539 "write": true, 00:16:10.539 "unmap": true, 00:16:10.539 "write_zeroes": true, 00:16:10.539 "flush": false, 00:16:10.539 "reset": true, 00:16:10.539 "compare": false, 00:16:10.539 "compare_and_write": false, 00:16:10.539 "abort": false, 00:16:10.539 "nvme_admin": false, 00:16:10.539 "nvme_io": false 00:16:10.539 }, 00:16:10.539 "driver_specific": { 00:16:10.539 "lvol": { 00:16:10.539 "lvol_store_uuid": "e4264c00-0c62-401f-ab21-77d4ea5dd9b9", 00:16:10.539 "base_bdev": "aio_bdev", 00:16:10.539 "thin_provision": false, 00:16:10.539 "snapshot": false, 00:16:10.539 "clone": false, 00:16:10.540 "esnap_clone": false 00:16:10.540 } 00:16:10.540 } 00:16:10.540 } 00:16:10.540 ] 00:16:10.540 20:16:47 -- common/autotest_common.sh@893 -- # return 0 00:16:10.540 20:16:47 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:16:10.540 20:16:47 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:10.540 20:16:47 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:10.540 20:16:47 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:16:10.540 20:16:47 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:10.799 20:16:48 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:10.799 20:16:48 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 57e623fb-04cf-4e12-b484-82899a44897b 00:16:11.058 20:16:48 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e4264c00-0c62-401f-ab21-77d4ea5dd9b9 00:16:11.058 20:16:48 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:11.317 20:16:48 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:11.317 00:16:11.317 real 0m17.155s 00:16:11.317 user 0m43.634s 00:16:11.317 sys 0m4.149s 00:16:11.317 20:16:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:11.317 20:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:11.317 ************************************ 00:16:11.317 END TEST lvs_grow_dirty 00:16:11.317 ************************************ 00:16:11.317 20:16:48 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:11.317 20:16:48 -- common/autotest_common.sh@794 -- # type=--id 00:16:11.317 20:16:48 -- common/autotest_common.sh@795 -- # id=0 00:16:11.317 20:16:48 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:11.317 20:16:48 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:11.317 20:16:48 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:11.317 20:16:48 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:11.317 20:16:48 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:11.317 20:16:48 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:11.317 nvmf_trace.0 00:16:11.317 20:16:48 -- common/autotest_common.sh@809 -- # return 0 00:16:11.317 20:16:48 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:11.317 20:16:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:11.317 20:16:48 -- nvmf/common.sh@116 -- # sync 00:16:11.317 20:16:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:11.317 20:16:48 -- nvmf/common.sh@119 -- # set +e 00:16:11.317 20:16:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:11.317 20:16:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:11.317 rmmod nvme_tcp 00:16:11.317 rmmod nvme_fabrics 00:16:11.577 rmmod nvme_keyring 00:16:11.577 20:16:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:11.577 20:16:48 -- nvmf/common.sh@123 -- # set -e 00:16:11.577 20:16:48 -- nvmf/common.sh@124 -- # return 0 00:16:11.577 20:16:48 -- nvmf/common.sh@477 -- # '[' -n 1755277 ']' 00:16:11.577 20:16:48 -- nvmf/common.sh@478 -- # killprocess 1755277 00:16:11.577 20:16:48 -- common/autotest_common.sh@924 -- # '[' -z 1755277 ']' 00:16:11.577 20:16:48 -- common/autotest_common.sh@928 -- # kill -0 1755277 00:16:11.577 20:16:48 -- common/autotest_common.sh@929 -- # uname 00:16:11.577 20:16:48 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:11.577 20:16:48 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1755277 00:16:11.577 20:16:48 -- common/autotest_common.sh@930 
-- # process_name=reactor_0 00:16:11.577 20:16:48 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:11.577 20:16:48 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1755277' 00:16:11.577 killing process with pid 1755277 00:16:11.577 20:16:48 -- common/autotest_common.sh@943 -- # kill 1755277 00:16:11.577 20:16:48 -- common/autotest_common.sh@948 -- # wait 1755277 00:16:11.837 20:16:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:11.837 20:16:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:11.837 20:16:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:11.837 20:16:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.837 20:16:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:11.837 20:16:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.837 20:16:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.837 20:16:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.746 20:16:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:13.746 00:16:13.746 real 0m42.593s 00:16:13.746 user 1m4.647s 00:16:13.746 sys 0m10.703s 00:16:13.746 20:16:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:13.746 20:16:51 -- common/autotest_common.sh@10 -- # set +x 00:16:13.746 ************************************ 00:16:13.746 END TEST nvmf_lvs_grow 00:16:13.746 ************************************ 00:16:13.746 20:16:51 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:13.746 20:16:51 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:13.746 20:16:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:13.746 20:16:51 -- common/autotest_common.sh@10 -- # set +x 00:16:13.746 ************************************ 00:16:13.746 START TEST nvmf_bdev_io_wait 00:16:13.746 ************************************ 00:16:13.746 20:16:51 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:14.007 * Looking for test storage... 
00:16:14.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.007 20:16:51 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.007 20:16:51 -- nvmf/common.sh@7 -- # uname -s 00:16:14.007 20:16:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.008 20:16:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.008 20:16:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.008 20:16:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.008 20:16:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.008 20:16:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.008 20:16:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.008 20:16:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.008 20:16:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.008 20:16:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.008 20:16:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:14.008 20:16:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:14.008 20:16:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.008 20:16:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.008 20:16:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.008 20:16:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.008 20:16:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.008 20:16:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.008 20:16:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.008 20:16:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.008 20:16:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.008 20:16:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.008 20:16:51 -- paths/export.sh@5 -- # export PATH 00:16:14.008 20:16:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.008 20:16:51 -- nvmf/common.sh@46 -- # : 0 00:16:14.008 20:16:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:14.008 20:16:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:14.008 20:16:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:14.008 20:16:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.008 20:16:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.008 20:16:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:14.008 20:16:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:14.008 20:16:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:14.008 20:16:51 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.008 20:16:51 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.008 20:16:51 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:14.008 20:16:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:14.008 20:16:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.008 20:16:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:14.008 20:16:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.008 20:16:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.008 20:16:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.008 20:16:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.008 20:16:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.008 20:16:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:14.008 20:16:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:14.008 20:16:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:14.008 20:16:51 -- common/autotest_common.sh@10 -- # set +x 00:16:20.620 20:16:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:20.620 20:16:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:20.620 20:16:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:20.620 20:16:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:20.620 20:16:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:20.620 20:16:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:20.620 20:16:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:20.620 20:16:56 -- nvmf/common.sh@294 -- # net_devs=() 00:16:20.620 20:16:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:20.620 20:16:56 -- 
nvmf/common.sh@295 -- # e810=() 00:16:20.620 20:16:56 -- nvmf/common.sh@295 -- # local -ga e810 00:16:20.620 20:16:56 -- nvmf/common.sh@296 -- # x722=() 00:16:20.620 20:16:56 -- nvmf/common.sh@296 -- # local -ga x722 00:16:20.620 20:16:56 -- nvmf/common.sh@297 -- # mlx=() 00:16:20.620 20:16:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:20.620 20:16:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.620 20:16:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.620 20:16:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.620 20:16:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.621 20:16:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.621 20:16:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.621 20:16:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.621 20:16:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.621 20:16:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.621 20:16:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.621 20:16:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.621 20:16:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:20.621 20:16:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:20.621 20:16:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:20.621 20:16:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:20.621 20:16:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:20.621 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:20.621 20:16:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:20.621 20:16:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:20.621 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:20.621 20:16:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:20.621 20:16:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:20.621 20:16:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.621 20:16:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:20.621 20:16:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.621 20:16:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:16:20.621 Found net devices under 0000:af:00.0: cvl_0_0 00:16:20.621 20:16:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.621 20:16:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:20.621 20:16:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.621 20:16:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:20.621 20:16:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.621 20:16:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:20.621 Found net devices under 0000:af:00.1: cvl_0_1 00:16:20.621 20:16:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.621 20:16:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:20.621 20:16:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:20.621 20:16:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:20.621 20:16:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:20.621 20:16:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.621 20:16:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.621 20:16:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.621 20:16:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:20.621 20:16:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.621 20:16:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.621 20:16:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:20.621 20:16:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.621 20:16:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.621 20:16:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:20.621 20:16:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:20.621 20:16:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.621 20:16:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.621 20:16:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.621 20:16:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.621 20:16:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:20.621 20:16:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.621 20:16:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.621 20:16:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.621 20:16:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:20.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:16:20.621 00:16:20.621 --- 10.0.0.2 ping statistics --- 00:16:20.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.621 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:20.621 20:16:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:16:20.621 00:16:20.621 --- 10.0.0.1 ping statistics --- 00:16:20.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.621 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:16:20.621 20:16:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.621 20:16:57 -- nvmf/common.sh@410 -- # return 0 00:16:20.621 20:16:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:20.621 20:16:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.621 20:16:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:20.621 20:16:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:20.621 20:16:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.621 20:16:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:20.621 20:16:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:20.621 20:16:57 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:20.621 20:16:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:20.621 20:16:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:20.621 20:16:57 -- common/autotest_common.sh@10 -- # set +x 00:16:20.621 20:16:57 -- nvmf/common.sh@469 -- # nvmfpid=1759818 00:16:20.621 20:16:57 -- nvmf/common.sh@470 -- # waitforlisten 1759818 00:16:20.621 20:16:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:20.621 20:16:57 -- common/autotest_common.sh@817 -- # '[' -z 1759818 ']' 00:16:20.621 20:16:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.621 20:16:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:20.621 20:16:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.621 20:16:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:20.621 20:16:57 -- common/autotest_common.sh@10 -- # set +x 00:16:20.621 [2024-02-14 20:16:57.312214] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:20.621 [2024-02-14 20:16:57.312256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.621 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.621 [2024-02-14 20:16:57.376747] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.621 [2024-02-14 20:16:57.446979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:20.621 [2024-02-14 20:16:57.447107] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.621 [2024-02-14 20:16:57.447115] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.621 [2024-02-14 20:16:57.447121] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
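Aside: the two *NOTICE* lines above double as usage hints. A minimal sketch of following them by hand — the build/bin location of spdk_trace is an assumption (default SPDK build layout); only the 'spdk_trace -s nvmf -i 0' invocation and the /dev/shm/nvmf_trace.0 path come from the log itself:

# Snapshot tracepoint events from the running nvmf target (shm instance 0):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or keep the raw shared-memory trace file for offline analysis:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0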
00:16:20.621 [2024-02-14 20:16:57.447167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.621 [2024-02-14 20:16:57.447268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.621 [2024-02-14 20:16:57.447335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.621 [2024-02-14 20:16:57.447336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.882 20:16:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:20.882 20:16:58 -- common/autotest_common.sh@850 -- # return 0 00:16:20.882 20:16:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:20.882 20:16:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:20.882 20:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.882 20:16:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:20.882 20:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.882 20:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.882 20:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:20.882 20:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.882 20:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.882 20:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.882 20:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.882 20:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.882 [2024-02-14 20:16:58.228748] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.882 20:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:20.882 20:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.882 20:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.882 Malloc0 00:16:20.882 20:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:20.882 20:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.882 20:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.882 20:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.882 20:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.882 20:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.882 20:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.882 20:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.882 20:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:20.882 [2024-02-14 20:16:58.281236] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.882 20:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1760045 00:16:20.882 
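The four bdevperf launches traced below share one pattern: a distinct core mask (-m) and instance id (-i) per workload, the target config fed in via process substitution (the /dev/fd/63 in the trace), and a pid captured for a later wait. A condensed sketch, assuming nvmf/common.sh (which defines gen_nvmf_target_json) is sourced as this harness does:

BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# One instance per I/O type; -q/-o/-t/-s match the flags traced below.
"$BPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$BPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
wait "$WRITE_PID" "$READ_PID"   # the script waits on each pid in turn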
20:16:58 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@30 -- # READ_PID=1760048 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:20.882 20:16:58 -- nvmf/common.sh@520 -- # config=() 00:16:20.882 20:16:58 -- nvmf/common.sh@520 -- # local subsystem config 00:16:20.882 20:16:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:20.882 20:16:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:20.882 { 00:16:20.882 "params": { 00:16:20.882 "name": "Nvme$subsystem", 00:16:20.882 "trtype": "$TEST_TRANSPORT", 00:16:20.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.882 "adrfam": "ipv4", 00:16:20.882 "trsvcid": "$NVMF_PORT", 00:16:20.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.882 "hdgst": ${hdgst:-false}, 00:16:20.882 "ddgst": ${ddgst:-false} 00:16:20.882 }, 00:16:20.882 "method": "bdev_nvme_attach_controller" 00:16:20.882 } 00:16:20.882 EOF 00:16:20.882 )") 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:20.882 20:16:58 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1760051 00:16:20.882 20:16:58 -- nvmf/common.sh@520 -- # config=() 00:16:20.882 20:16:58 -- nvmf/common.sh@520 -- # local subsystem config 00:16:20.882 20:16:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:20.882 20:16:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:20.882 { 00:16:20.882 "params": { 00:16:20.883 "name": "Nvme$subsystem", 00:16:20.883 "trtype": "$TEST_TRANSPORT", 00:16:20.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.883 "adrfam": "ipv4", 00:16:20.883 "trsvcid": "$NVMF_PORT", 00:16:20.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.883 "hdgst": ${hdgst:-false}, 00:16:20.883 "ddgst": ${ddgst:-false} 00:16:20.883 }, 00:16:20.883 "method": "bdev_nvme_attach_controller" 00:16:20.883 } 00:16:20.883 EOF 00:16:20.883 )") 00:16:20.883 20:16:58 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:20.883 20:16:58 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:20.883 20:16:58 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1760054 00:16:20.883 20:16:58 -- target/bdev_io_wait.sh@35 -- # sync 00:16:20.883 20:16:58 -- nvmf/common.sh@520 -- # config=() 00:16:20.883 20:16:58 -- nvmf/common.sh@542 -- # cat 00:16:20.883 20:16:58 -- nvmf/common.sh@520 -- # local subsystem config 00:16:20.883 20:16:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:20.883 20:16:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:20.883 { 00:16:20.883 "params": { 00:16:20.883 "name": "Nvme$subsystem", 00:16:20.883 "trtype": "$TEST_TRANSPORT", 00:16:20.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.883 "adrfam": "ipv4", 00:16:20.883 "trsvcid": "$NVMF_PORT", 00:16:20.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.883 "hdgst": ${hdgst:-false}, 00:16:20.883 "ddgst": ${ddgst:-false} 00:16:20.883 }, 
00:16:20.883 "method": "bdev_nvme_attach_controller" 00:16:20.883 } 00:16:20.883 EOF 00:16:20.883 )") 00:16:20.883 20:16:58 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:20.883 20:16:58 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:20.883 20:16:58 -- nvmf/common.sh@542 -- # cat 00:16:20.883 20:16:58 -- nvmf/common.sh@520 -- # config=() 00:16:20.883 20:16:58 -- nvmf/common.sh@520 -- # local subsystem config 00:16:20.883 20:16:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:20.883 20:16:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:20.883 { 00:16:20.883 "params": { 00:16:20.883 "name": "Nvme$subsystem", 00:16:20.883 "trtype": "$TEST_TRANSPORT", 00:16:20.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.883 "adrfam": "ipv4", 00:16:20.883 "trsvcid": "$NVMF_PORT", 00:16:20.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.883 "hdgst": ${hdgst:-false}, 00:16:20.883 "ddgst": ${ddgst:-false} 00:16:20.883 }, 00:16:20.883 "method": "bdev_nvme_attach_controller" 00:16:20.883 } 00:16:20.883 EOF 00:16:20.883 )") 00:16:20.883 20:16:58 -- nvmf/common.sh@542 -- # cat 00:16:20.883 20:16:58 -- target/bdev_io_wait.sh@37 -- # wait 1760045 00:16:20.883 20:16:58 -- nvmf/common.sh@542 -- # cat 00:16:20.883 20:16:58 -- nvmf/common.sh@544 -- # jq . 00:16:20.883 20:16:58 -- nvmf/common.sh@544 -- # jq . 00:16:21.143 20:16:58 -- nvmf/common.sh@544 -- # jq . 00:16:21.143 20:16:58 -- nvmf/common.sh@545 -- # IFS=, 00:16:21.143 20:16:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:21.143 "params": { 00:16:21.143 "name": "Nvme1", 00:16:21.143 "trtype": "tcp", 00:16:21.143 "traddr": "10.0.0.2", 00:16:21.143 "adrfam": "ipv4", 00:16:21.143 "trsvcid": "4420", 00:16:21.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.143 "hdgst": false, 00:16:21.143 "ddgst": false 00:16:21.143 }, 00:16:21.143 "method": "bdev_nvme_attach_controller" 00:16:21.143 }' 00:16:21.143 20:16:58 -- nvmf/common.sh@545 -- # IFS=, 00:16:21.143 20:16:58 -- nvmf/common.sh@544 -- # jq . 
00:16:21.143 20:16:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:16:21.143 "params": {
00:16:21.143 "name": "Nvme1",
00:16:21.143 "trtype": "tcp",
00:16:21.143 "traddr": "10.0.0.2",
00:16:21.143 "adrfam": "ipv4",
00:16:21.143 "trsvcid": "4420",
00:16:21.143 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:21.143 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:21.143 "hdgst": false,
00:16:21.143 "ddgst": false
00:16:21.143 },
00:16:21.143 "method": "bdev_nvme_attach_controller"
00:16:21.143 }'
00:16:21.143 20:16:58 -- nvmf/common.sh@545 -- # IFS=,
00:16:21.143 20:16:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:16:21.143 "params": {
00:16:21.143 "name": "Nvme1",
00:16:21.143 "trtype": "tcp",
00:16:21.143 "traddr": "10.0.0.2",
00:16:21.143 "adrfam": "ipv4",
00:16:21.143 "trsvcid": "4420",
00:16:21.143 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:21.143 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:21.143 "hdgst": false,
00:16:21.143 "ddgst": false
00:16:21.143 },
00:16:21.143 "method": "bdev_nvme_attach_controller"
00:16:21.143 }'
00:16:21.143 20:16:58 -- nvmf/common.sh@545 -- # IFS=,
00:16:21.143 20:16:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:16:21.143 "params": {
00:16:21.143 "name": "Nvme1",
00:16:21.143 "trtype": "tcp",
00:16:21.143 "traddr": "10.0.0.2",
00:16:21.143 "adrfam": "ipv4",
00:16:21.143 "trsvcid": "4420",
00:16:21.143 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:21.143 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:21.143 "hdgst": false,
00:16:21.143 "ddgst": false
00:16:21.143 },
00:16:21.143 "method": "bdev_nvme_attach_controller"
00:16:21.143 }'
00:16:21.143 [2024-02-14 20:16:58.327779] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:16:21.143 [2024-02-14 20:16:58.327825] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:16:21.143 [2024-02-14 20:16:58.329823] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:16:21.143 [2024-02-14 20:16:58.329822] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:16:21.143 [2024-02-14 20:16:58.329877] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:16:21.143 [2024-02-14 20:16:58.329878] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:16:21.143 [2024-02-14 20:16:58.338277] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:16:21.143 [2024-02-14 20:16:58.338345] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:21.143 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.143 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.143 [2024-02-14 20:16:58.519505] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.403 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.403 [2024-02-14 20:16:58.598069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:21.403 [2024-02-14 20:16:58.598104] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:21.403 [2024-02-14 20:16:58.622095] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.403 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.403 [2024-02-14 20:16:58.697398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:21.403 [2024-02-14 20:16:58.697458] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:21.403 [2024-02-14 20:16:58.715858] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.403 [2024-02-14 20:16:58.776367] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.403 [2024-02-14 20:16:58.809363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:21.403 [2024-02-14 20:16:58.809422] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:21.663 [2024-02-14 20:16:58.849330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:21.663 [2024-02-14 20:16:58.849383] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:21.663 Running I/O for 1 seconds... 00:16:21.663 Running I/O for 1 seconds... 00:16:21.663 Running I/O for 1 seconds... 00:16:21.663 Running I/O for 1 seconds... 
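In the per-job tables that follow, IOPS and average latency should roughly satisfy Little's law (IOPS ≈ queue depth / average latency). A quick cross-check of the flush job's first row below (queue depth 128, 496.31 us average), runnable with plain awk:

awk 'BEGIN { printf "predicted IOPS = %.0f\n", 128 / 496.31e-6 }'
# -> predicted IOPS = 257903, vs 256883.16 reported (within ~0.4%)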
00:16:22.608 00:16:22.608 Latency(us) 00:16:22.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.608 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:22.608 Nvme1n1 : 1.00 256883.16 1003.45 0.00 0.00 496.31 199.92 647.56 00:16:22.608 =================================================================================================================== 00:16:22.608 Total : 256883.16 1003.45 0.00 0.00 496.31 199.92 647.56 00:16:22.608 [2024-02-14 20:16:59.873928] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:22.608 00:16:22.608 Latency(us) 00:16:22.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.608 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:22.608 Nvme1n1 : 1.02 7716.05 30.14 0.00 0.00 16463.73 2434.19 23093.64 00:16:22.608 =================================================================================================================== 00:16:22.608 Total : 7716.05 30.14 0.00 0.00 16463.73 2434.19 23093.64 00:16:22.608 [2024-02-14 20:16:59.997933] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:22.867 00:16:22.867 Latency(us) 00:16:22.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.867 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:22.867 Nvme1n1 : 1.01 13317.45 52.02 0.00 0.00 9577.03 6647.22 21470.84 00:16:22.867 =================================================================================================================== 00:16:22.867 Total : 13317.45 52.02 0.00 0.00 9577.03 6647.22 21470.84 00:16:22.867 [2024-02-14 20:17:00.049731] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:22.867 00:16:22.867 Latency(us) 00:16:22.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.867 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:22.867 Nvme1n1 : 1.00 7373.65 28.80 0.00 0.00 17318.83 4400.27 38447.79 00:16:22.867 =================================================================================================================== 00:16:22.867 Total : 7373.65 28.80 0.00 0.00 17318.83 4400.27 38447.79 00:16:22.867 [2024-02-14 20:17:00.065907] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:23.126 20:17:00 -- target/bdev_io_wait.sh@38 -- # wait 1760048 00:16:23.126 20:17:00 -- target/bdev_io_wait.sh@39 -- # wait 1760051 00:16:23.126 20:17:00 -- target/bdev_io_wait.sh@40 -- # wait 1760054 00:16:23.126 20:17:00 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.126 20:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.126 20:17:00 -- common/autotest_common.sh@10 -- # set +x 00:16:23.126 20:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.126 20:17:00 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:23.126 20:17:00 -- target/bdev_io_wait.sh@46 -- # 
nvmftestfini 00:16:23.126 20:17:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:23.126 20:17:00 -- nvmf/common.sh@116 -- # sync 00:16:23.126 20:17:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:23.126 20:17:00 -- nvmf/common.sh@119 -- # set +e 00:16:23.126 20:17:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:23.126 20:17:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:23.126 rmmod nvme_tcp 00:16:23.126 rmmod nvme_fabrics 00:16:23.126 rmmod nvme_keyring 00:16:23.126 20:17:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:23.126 20:17:00 -- nvmf/common.sh@123 -- # set -e 00:16:23.126 20:17:00 -- nvmf/common.sh@124 -- # return 0 00:16:23.126 20:17:00 -- nvmf/common.sh@477 -- # '[' -n 1759818 ']' 00:16:23.126 20:17:00 -- nvmf/common.sh@478 -- # killprocess 1759818 00:16:23.126 20:17:00 -- common/autotest_common.sh@924 -- # '[' -z 1759818 ']' 00:16:23.126 20:17:00 -- common/autotest_common.sh@928 -- # kill -0 1759818 00:16:23.126 20:17:00 -- common/autotest_common.sh@929 -- # uname 00:16:23.126 20:17:00 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:23.126 20:17:00 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1759818 00:16:23.126 20:17:00 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:23.126 20:17:00 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:23.126 20:17:00 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1759818' 00:16:23.126 killing process with pid 1759818 00:16:23.126 20:17:00 -- common/autotest_common.sh@943 -- # kill 1759818 00:16:23.126 20:17:00 -- common/autotest_common.sh@948 -- # wait 1759818 00:16:23.385 20:17:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:23.385 20:17:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:23.385 20:17:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:23.385 20:17:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.385 20:17:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:23.385 20:17:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.385 20:17:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.385 20:17:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.921 20:17:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:25.921 00:16:25.921 real 0m11.579s 00:16:25.921 user 0m19.975s 00:16:25.921 sys 0m6.215s 00:16:25.921 20:17:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:25.921 20:17:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.921 ************************************ 00:16:25.921 END TEST nvmf_bdev_io_wait 00:16:25.921 ************************************ 00:16:25.921 20:17:02 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:25.921 20:17:02 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:25.921 20:17:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:25.921 20:17:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.921 ************************************ 00:16:25.921 START TEST nvmf_queue_depth 00:16:25.921 ************************************ 00:16:25.921 20:17:02 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:25.921 * Looking for test storage... 
00:16:25.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.921 20:17:02 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.921 20:17:02 -- nvmf/common.sh@7 -- # uname -s 00:16:25.921 20:17:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.921 20:17:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.921 20:17:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.921 20:17:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.921 20:17:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.921 20:17:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.921 20:17:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.921 20:17:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.921 20:17:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.921 20:17:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.921 20:17:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:25.921 20:17:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:25.921 20:17:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.921 20:17:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.921 20:17:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.921 20:17:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.921 20:17:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.921 20:17:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.921 20:17:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.921 20:17:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.921 20:17:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.921 20:17:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.921 20:17:02 -- paths/export.sh@5 -- # export PATH 00:16:25.921 20:17:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.921 20:17:02 -- nvmf/common.sh@46 -- # : 0 00:16:25.921 20:17:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:25.921 20:17:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:25.921 20:17:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:25.921 20:17:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.921 20:17:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.921 20:17:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:25.921 20:17:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:25.921 20:17:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:25.921 20:17:02 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:25.921 20:17:02 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:25.921 20:17:02 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.921 20:17:02 -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:25.921 20:17:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:25.921 20:17:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.921 20:17:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:25.921 20:17:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:25.921 20:17:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:25.921 20:17:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.921 20:17:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.921 20:17:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.921 20:17:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:25.921 20:17:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:25.921 20:17:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:25.921 20:17:02 -- common/autotest_common.sh@10 -- # set +x 00:16:31.194 20:17:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:31.194 20:17:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:31.194 20:17:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:31.194 20:17:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:31.194 20:17:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:31.194 20:17:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:31.194 20:17:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:31.194 20:17:08 -- nvmf/common.sh@294 -- # net_devs=() 
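
The trace now enters gather_supported_nvmf_pci_devs, which picks test NICs purely by PCI vendor:device ID and then maps each chosen device to its kernel interface names via sysfs. Condensed into script form (pci_bus_cache is filled in earlier by the harness; the comments are editorial):

    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 family device IDs
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # the 0x159b ports found below
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs cached
    pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810 narrows the list
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")  # strip paths, leaving cvl_0_0, cvl_0_1
        net_devs+=("${pci_net_devs[@]}")
    done
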
00:16:31.194 20:17:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:31.194 20:17:08 -- nvmf/common.sh@295 -- # e810=() 00:16:31.194 20:17:08 -- nvmf/common.sh@295 -- # local -ga e810 00:16:31.194 20:17:08 -- nvmf/common.sh@296 -- # x722=() 00:16:31.194 20:17:08 -- nvmf/common.sh@296 -- # local -ga x722 00:16:31.194 20:17:08 -- nvmf/common.sh@297 -- # mlx=() 00:16:31.194 20:17:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:31.194 20:17:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.194 20:17:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:31.194 20:17:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:31.194 20:17:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:31.194 20:17:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:31.194 20:17:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:31.194 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:31.194 20:17:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:31.194 20:17:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:31.194 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:31.194 20:17:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:31.194 20:17:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:31.194 20:17:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:31.195 20:17:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:31.195 20:17:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.195 20:17:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:31.195 20:17:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:31.195 20:17:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:31.195 Found net devices under 0000:af:00.0: cvl_0_0 00:16:31.195 20:17:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.195 20:17:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:31.195 20:17:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.195 20:17:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:31.195 20:17:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.195 20:17:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:31.195 Found net devices under 0000:af:00.1: cvl_0_1 00:16:31.195 20:17:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.195 20:17:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:31.195 20:17:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:31.195 20:17:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:31.195 20:17:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:31.195 20:17:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:31.195 20:17:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.195 20:17:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.195 20:17:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.195 20:17:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:31.195 20:17:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.195 20:17:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.195 20:17:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:31.195 20:17:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.195 20:17:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.195 20:17:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:31.195 20:17:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:31.195 20:17:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.195 20:17:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.195 20:17:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.195 20:17:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.195 20:17:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:31.195 20:17:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.195 20:17:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.195 20:17:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.195 20:17:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:31.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:16:31.195 00:16:31.195 --- 10.0.0.2 ping statistics --- 00:16:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.195 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:16:31.195 20:17:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:16:31.195 00:16:31.195 --- 10.0.0.1 ping statistics --- 00:16:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.195 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:16:31.195 20:17:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.195 20:17:08 -- nvmf/common.sh@410 -- # return 0 00:16:31.195 20:17:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:31.195 20:17:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.195 20:17:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:31.195 20:17:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:31.195 20:17:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.195 20:17:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:31.195 20:17:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:31.195 20:17:08 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:31.195 20:17:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:31.195 20:17:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:31.195 20:17:08 -- common/autotest_common.sh@10 -- # set +x 00:16:31.195 20:17:08 -- nvmf/common.sh@469 -- # nvmfpid=1764123 00:16:31.195 20:17:08 -- nvmf/common.sh@470 -- # waitforlisten 1764123 00:16:31.195 20:17:08 -- common/autotest_common.sh@817 -- # '[' -z 1764123 ']' 00:16:31.195 20:17:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.195 20:17:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:31.195 20:17:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.195 20:17:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:31.195 20:17:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:31.195 20:17:08 -- common/autotest_common.sh@10 -- # set +x 00:16:31.195 [2024-02-14 20:17:08.576772] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:31.195 [2024-02-14 20:17:08.576813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.195 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.455 [2024-02-14 20:17:08.638179] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.455 [2024-02-14 20:17:08.712832] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:31.455 [2024-02-14 20:17:08.712934] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.455 [2024-02-14 20:17:08.712942] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.455 [2024-02-14 20:17:08.712948] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
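
All of the namespace plumbing traced above reduces to one short recipe: move one physical E810 port into a private network namespace to play the target, leave its sibling port in the root namespace as the initiator, open TCP port 4420, and prove reachability in both directions before starting the target app inside the namespace. Collected in one place, using the interface names discovered on this host:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator port

Because the two ports are presumably cabled back to back, initiator traffic genuinely leaves the host through one NIC and returns through the other, which is the point of the NET_TYPE=phy configuration.
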
00:16:31.455 [2024-02-14 20:17:08.712963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.023 20:17:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:32.023 20:17:09 -- common/autotest_common.sh@850 -- # return 0 00:16:32.023 20:17:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:32.023 20:17:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:32.023 20:17:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.023 20:17:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.023 20:17:09 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:32.023 20:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.023 20:17:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.023 [2024-02-14 20:17:09.398731] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.023 20:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.023 20:17:09 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:32.023 20:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.023 20:17:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.023 Malloc0 00:16:32.023 20:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.023 20:17:09 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:32.023 20:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.023 20:17:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.023 20:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.283 20:17:09 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:32.283 20:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.283 20:17:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.283 20:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.283 20:17:09 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:32.283 20:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.283 20:17:09 -- common/autotest_common.sh@10 -- # set +x 00:16:32.283 [2024-02-14 20:17:09.452005] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.283 20:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.283 20:17:09 -- target/queue_depth.sh@30 -- # bdevperf_pid=1764366 00:16:32.283 20:17:09 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:32.283 20:17:09 -- target/queue_depth.sh@33 -- # waitforlisten 1764366 /var/tmp/bdevperf.sock 00:16:32.283 20:17:09 -- common/autotest_common.sh@817 -- # '[' -z 1764366 ']' 00:16:32.283 20:17:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.283 20:17:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.283 20:17:09 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:32.283 20:17:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:32.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:32.283 20:17:09 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:32.283 20:17:09 -- common/autotest_common.sh@10 -- # set +x
00:16:32.283 [2024-02-14 20:17:09.495367] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:16:32.283 [2024-02-14 20:17:09.495406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764366 ]
00:16:32.283 EAL: No free 2048 kB hugepages reported on node 1
00:16:32.283 [2024-02-14 20:17:09.555112] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:32.283 [2024-02-14 20:17:09.624965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:33.221 20:17:10 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:33.221 20:17:10 -- common/autotest_common.sh@850 -- # return 0
00:16:33.221 20:17:10 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:33.221 20:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:33.221 20:17:10 -- common/autotest_common.sh@10 -- # set +x
00:16:33.221 NVMe0n1
00:16:33.221 20:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:33.221 20:17:10 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:33.221 Running I/O for 10 seconds...
00:16:45.434
00:16:45.434 Latency(us)
00:16:45.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:45.434 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:16:45.434 Verification LBA range: start 0x0 length 0x4000
00:16:45.434 NVMe0n1 : 10.05 18430.01 71.99 0.00 0.00 55403.39 11983.73 54176.43
00:16:45.434 ===================================================================================================================
00:16:45.434 Total : 18430.01 71.99 0.00 0.00 55403.39 11983.73 54176.43
00:16:45.435 0
00:16:45.435 20:17:20 -- target/queue_depth.sh@39 -- # killprocess 1764366
00:16:45.435 20:17:20 -- common/autotest_common.sh@924 -- # '[' -z 1764366 ']'
00:16:45.435 20:17:20 -- common/autotest_common.sh@928 -- # kill -0 1764366
00:16:45.435 20:17:20 -- common/autotest_common.sh@929 -- # uname
00:16:45.435 20:17:20 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:16:45.435 20:17:20 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1764366
00:16:45.435 20:17:20 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:16:45.435 20:17:20 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:16:45.435 20:17:20 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1764366'
00:16:45.435 killing process with pid 1764366
00:16:45.435 20:17:20 -- common/autotest_common.sh@943 -- # kill 1764366
00:16:45.435 Received shutdown signal, test time was about 10.000000 seconds
00:16:45.435
00:16:45.435 Latency(us)
00:16:45.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:45.435 ===================================================================================================================
00:16:45.435 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:45.435 20:17:20 --
common/autotest_common.sh@948 -- # wait 1764366 00:16:45.435 20:17:20 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:45.435 20:17:20 -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:45.435 20:17:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:45.435 20:17:20 -- nvmf/common.sh@116 -- # sync 00:16:45.435 20:17:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:45.435 20:17:20 -- nvmf/common.sh@119 -- # set +e 00:16:45.435 20:17:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:45.435 20:17:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:45.435 rmmod nvme_tcp 00:16:45.435 rmmod nvme_fabrics 00:16:45.435 rmmod nvme_keyring 00:16:45.435 20:17:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:45.435 20:17:20 -- nvmf/common.sh@123 -- # set -e 00:16:45.435 20:17:20 -- nvmf/common.sh@124 -- # return 0 00:16:45.435 20:17:20 -- nvmf/common.sh@477 -- # '[' -n 1764123 ']' 00:16:45.435 20:17:20 -- nvmf/common.sh@478 -- # killprocess 1764123 00:16:45.435 20:17:20 -- common/autotest_common.sh@924 -- # '[' -z 1764123 ']' 00:16:45.435 20:17:20 -- common/autotest_common.sh@928 -- # kill -0 1764123 00:16:45.435 20:17:20 -- common/autotest_common.sh@929 -- # uname 00:16:45.435 20:17:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:45.435 20:17:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1764123 00:16:45.435 20:17:21 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:16:45.435 20:17:21 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:16:45.435 20:17:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1764123' 00:16:45.435 killing process with pid 1764123 00:16:45.435 20:17:21 -- common/autotest_common.sh@943 -- # kill 1764123 00:16:45.435 20:17:21 -- common/autotest_common.sh@948 -- # wait 1764123 00:16:45.435 20:17:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:45.435 20:17:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:45.435 20:17:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:45.435 20:17:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.435 20:17:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:45.435 20:17:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.435 20:17:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.435 20:17:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.005 20:17:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:46.005 00:16:46.005 real 0m20.570s 00:16:46.005 user 0m24.840s 00:16:46.005 sys 0m5.987s 00:16:46.005 20:17:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.005 20:17:23 -- common/autotest_common.sh@10 -- # set +x 00:16:46.005 ************************************ 00:16:46.005 END TEST nvmf_queue_depth 00:16:46.005 ************************************ 00:16:46.005 20:17:23 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:46.005 20:17:23 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:46.005 20:17:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:46.005 20:17:23 -- common/autotest_common.sh@10 -- # set +x 00:16:46.005 ************************************ 00:16:46.005 START TEST nvmf_multipath 00:16:46.005 ************************************ 00:16:46.005 20:17:23 -- common/autotest_common.sh@1102 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:46.265 * Looking for test storage... 00:16:46.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.265 20:17:23 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.265 20:17:23 -- nvmf/common.sh@7 -- # uname -s 00:16:46.265 20:17:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.265 20:17:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.265 20:17:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.265 20:17:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.265 20:17:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.265 20:17:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.265 20:17:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.265 20:17:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.265 20:17:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.265 20:17:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.265 20:17:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:46.265 20:17:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:46.265 20:17:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.265 20:17:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.265 20:17:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.265 20:17:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.265 20:17:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.265 20:17:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.265 20:17:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.265 20:17:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.266 20:17:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.266 20:17:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.266 20:17:23 -- paths/export.sh@5 -- # export PATH 00:16:46.266 20:17:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.266 20:17:23 -- nvmf/common.sh@46 -- # : 0 00:16:46.266 20:17:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:46.266 20:17:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:46.266 20:17:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:46.266 20:17:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.266 20:17:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.266 20:17:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:46.266 20:17:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:46.266 20:17:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:46.266 20:17:23 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:46.266 20:17:23 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:46.266 20:17:23 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:46.266 20:17:23 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.266 20:17:23 -- target/multipath.sh@43 -- # nvmftestinit 00:16:46.266 20:17:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:46.266 20:17:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.266 20:17:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:46.266 20:17:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:46.266 20:17:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:46.266 20:17:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.266 20:17:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.266 20:17:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.266 20:17:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:46.266 20:17:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:46.266 20:17:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:46.266 20:17:23 -- common/autotest_common.sh@10 -- # set +x 00:16:52.832 20:17:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:52.832 20:17:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:52.832 20:17:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:52.832 20:17:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:52.832 20:17:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:52.832 20:17:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:52.832 20:17:29 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:16:52.832 20:17:29 -- nvmf/common.sh@294 -- # net_devs=() 00:16:52.832 20:17:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:52.832 20:17:29 -- nvmf/common.sh@295 -- # e810=() 00:16:52.832 20:17:29 -- nvmf/common.sh@295 -- # local -ga e810 00:16:52.832 20:17:29 -- nvmf/common.sh@296 -- # x722=() 00:16:52.832 20:17:29 -- nvmf/common.sh@296 -- # local -ga x722 00:16:52.832 20:17:29 -- nvmf/common.sh@297 -- # mlx=() 00:16:52.832 20:17:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:52.832 20:17:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.832 20:17:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.832 20:17:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.832 20:17:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.832 20:17:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.832 20:17:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.833 20:17:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.833 20:17:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.833 20:17:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.833 20:17:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.833 20:17:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.833 20:17:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:52.833 20:17:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:52.833 20:17:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:52.833 20:17:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.833 20:17:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:52.833 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:52.833 20:17:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.833 20:17:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:52.833 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:52.833 20:17:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:52.833 20:17:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.833 20:17:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.833 20:17:29 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:16:52.833 20:17:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.833 20:17:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:52.833 Found net devices under 0000:af:00.0: cvl_0_0 00:16:52.833 20:17:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.833 20:17:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.833 20:17:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.833 20:17:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:52.833 20:17:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.833 20:17:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:52.833 Found net devices under 0000:af:00.1: cvl_0_1 00:16:52.833 20:17:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.833 20:17:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:52.833 20:17:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:52.833 20:17:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:52.833 20:17:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.833 20:17:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.833 20:17:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.833 20:17:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:52.833 20:17:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.833 20:17:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.833 20:17:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:52.833 20:17:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.833 20:17:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.833 20:17:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:52.833 20:17:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:52.833 20:17:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.833 20:17:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.833 20:17:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.833 20:17:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.833 20:17:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:52.833 20:17:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.833 20:17:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.833 20:17:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.833 20:17:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:52.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:16:52.833 00:16:52.833 --- 10.0.0.2 ping statistics --- 00:16:52.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.833 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:16:52.833 20:17:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:16:52.833 00:16:52.833 --- 10.0.0.1 ping statistics --- 00:16:52.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.833 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:52.833 20:17:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.833 20:17:29 -- nvmf/common.sh@410 -- # return 0 00:16:52.833 20:17:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:52.833 20:17:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.833 20:17:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.833 20:17:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:52.833 20:17:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:52.833 20:17:29 -- target/multipath.sh@45 -- # '[' -z ']' 00:16:52.833 20:17:29 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:52.833 only one NIC for nvmf test 00:16:52.833 20:17:29 -- target/multipath.sh@47 -- # nvmftestfini 00:16:52.833 20:17:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:52.833 20:17:29 -- nvmf/common.sh@116 -- # sync 00:16:52.833 20:17:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:52.833 20:17:29 -- nvmf/common.sh@119 -- # set +e 00:16:52.833 20:17:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:52.833 20:17:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:52.833 rmmod nvme_tcp 00:16:52.833 rmmod nvme_fabrics 00:16:52.833 rmmod nvme_keyring 00:16:52.833 20:17:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:52.833 20:17:29 -- nvmf/common.sh@123 -- # set -e 00:16:52.833 20:17:29 -- nvmf/common.sh@124 -- # return 0 00:16:52.833 20:17:29 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:16:52.833 20:17:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:52.833 20:17:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:52.833 20:17:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:52.833 20:17:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:52.833 20:17:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.833 20:17:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.833 20:17:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.213 20:17:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:54.213 20:17:31 -- target/multipath.sh@48 -- # exit 0 00:16:54.213 20:17:31 -- target/multipath.sh@1 -- # nvmftestfini 00:16:54.213 20:17:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:54.213 20:17:31 -- nvmf/common.sh@116 -- # sync 00:16:54.213 20:17:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:54.213 20:17:31 -- nvmf/common.sh@119 -- # set +e 00:16:54.213 20:17:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:54.213 20:17:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:54.213 20:17:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:54.213 20:17:31 -- nvmf/common.sh@123 -- # set -e 00:16:54.213 20:17:31 -- nvmf/common.sh@124 -- # return 0 00:16:54.213 20:17:31 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:16:54.213 20:17:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:54.213 20:17:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:54.213 20:17:31 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:16:54.213 20:17:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.213 20:17:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:54.213 20:17:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.213 20:17:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.213 20:17:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.213 20:17:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:54.213 00:16:54.213 real 0m8.234s 00:16:54.213 user 0m1.718s 00:16:54.213 sys 0m4.524s 00:16:54.213 20:17:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:54.213 20:17:31 -- common/autotest_common.sh@10 -- # set +x 00:16:54.213 ************************************ 00:16:54.213 END TEST nvmf_multipath 00:16:54.213 ************************************ 00:16:54.473 20:17:31 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:54.473 20:17:31 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:54.473 20:17:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:54.473 20:17:31 -- common/autotest_common.sh@10 -- # set +x 00:16:54.473 ************************************ 00:16:54.473 START TEST nvmf_zcopy 00:16:54.473 ************************************ 00:16:54.473 20:17:31 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:54.473 * Looking for test storage... 00:16:54.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.473 20:17:31 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.473 20:17:31 -- nvmf/common.sh@7 -- # uname -s 00:16:54.473 20:17:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.473 20:17:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.473 20:17:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.473 20:17:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.473 20:17:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.473 20:17:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.473 20:17:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.473 20:17:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.473 20:17:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.473 20:17:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.473 20:17:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:54.473 20:17:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:54.473 20:17:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.473 20:17:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.473 20:17:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.473 20:17:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.473 20:17:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.473 20:17:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.473 20:17:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.474 20:17:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.474 20:17:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.474 20:17:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.474 20:17:31 -- paths/export.sh@5 -- # export PATH 00:16:54.474 20:17:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.474 20:17:31 -- nvmf/common.sh@46 -- # : 0 00:16:54.474 20:17:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:54.474 20:17:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:54.474 20:17:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:54.474 20:17:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.474 20:17:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.474 20:17:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:54.474 20:17:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:54.474 20:17:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:54.474 20:17:31 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:54.474 20:17:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:54.474 20:17:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.474 20:17:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:54.474 20:17:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:54.474 20:17:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:54.474 20:17:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.474 20:17:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.474 20:17:31 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.474 20:17:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:54.474 20:17:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:54.474 20:17:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:54.474 20:17:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.042 20:17:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:01.042 20:17:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:01.042 20:17:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:01.042 20:17:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:01.042 20:17:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:01.042 20:17:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:01.042 20:17:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:01.042 20:17:37 -- nvmf/common.sh@294 -- # net_devs=() 00:17:01.042 20:17:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:01.042 20:17:37 -- nvmf/common.sh@295 -- # e810=() 00:17:01.042 20:17:37 -- nvmf/common.sh@295 -- # local -ga e810 00:17:01.042 20:17:37 -- nvmf/common.sh@296 -- # x722=() 00:17:01.042 20:17:37 -- nvmf/common.sh@296 -- # local -ga x722 00:17:01.042 20:17:37 -- nvmf/common.sh@297 -- # mlx=() 00:17:01.042 20:17:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:01.042 20:17:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.042 20:17:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:01.042 20:17:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:01.042 20:17:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:01.042 20:17:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.042 20:17:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:01.042 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:01.042 20:17:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.042 20:17:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:01.042 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:01.042 
20:17:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:01.042 20:17:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.042 20:17:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.042 20:17:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.042 20:17:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.042 20:17:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:01.042 Found net devices under 0000:af:00.0: cvl_0_0 00:17:01.042 20:17:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.042 20:17:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.042 20:17:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.042 20:17:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.042 20:17:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.042 20:17:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:01.042 Found net devices under 0000:af:00.1: cvl_0_1 00:17:01.042 20:17:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.042 20:17:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:01.042 20:17:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:01.042 20:17:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:01.042 20:17:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.042 20:17:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.042 20:17:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.042 20:17:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:01.042 20:17:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.042 20:17:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.042 20:17:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:01.042 20:17:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.042 20:17:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.042 20:17:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:01.042 20:17:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:01.042 20:17:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.042 20:17:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.042 20:17:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.042 20:17:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.042 20:17:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:01.042 20:17:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.042 20:17:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.042 20:17:37 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.042 20:17:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:01.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:17:01.042 00:17:01.042 --- 10.0.0.2 ping statistics --- 00:17:01.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.042 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:17:01.042 20:17:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:17:01.042 00:17:01.042 --- 10.0.0.1 ping statistics --- 00:17:01.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.042 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:17:01.042 20:17:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.042 20:17:37 -- nvmf/common.sh@410 -- # return 0 00:17:01.042 20:17:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:01.042 20:17:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.042 20:17:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:01.042 20:17:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.042 20:17:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:01.042 20:17:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:01.042 20:17:37 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:01.043 20:17:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:01.043 20:17:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:01.043 20:17:37 -- common/autotest_common.sh@10 -- # set +x 00:17:01.043 20:17:37 -- nvmf/common.sh@469 -- # nvmfpid=1773793 00:17:01.043 20:17:37 -- nvmf/common.sh@470 -- # waitforlisten 1773793 00:17:01.043 20:17:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:01.043 20:17:37 -- common/autotest_common.sh@817 -- # '[' -z 1773793 ']' 00:17:01.043 20:17:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.043 20:17:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:01.043 20:17:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.043 20:17:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:01.043 20:17:37 -- common/autotest_common.sh@10 -- # set +x 00:17:01.043 [2024-02-14 20:17:37.974809] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
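
The nvmf_tcp_init block above is the whole point-to-point topology for this phy run: port 0000:af:00.0 (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 on the initiator interface, and both directions are ping-verified (0.155 ms and 0.288 ms RTT) before NVMF_APP is wrapped in 'ip netns exec'. A minimal hand-rolled sketch of the same steps, assuming the cvl_0_* names and 10.0.0.x addresses the harness discovered above (this is an illustration, not the harness's literal code):

   # Target NIC goes into its own namespace; initiator NIC stays in the root ns.
   ip netns add cvl_0_0_ns_spdk
   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
   ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
   ip link set cvl_0_1 up
   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
   ip netns exec cvl_0_0_ns_spdk ip link set lo up
   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
   ping -c 1 10.0.0.2                                                   # initiator -> target
   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Running the target under 'ip netns exec cvl_0_0_ns_spdk' is what lets the two back-to-back E810 ports act as both ends of a real TCP path on a single host.
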
00:17:01.043 [2024-02-14 20:17:37.974848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:01.043 EAL: No free 2048 kB hugepages reported on node 1
00:17:01.043 [2024-02-14 20:17:38.038151] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:01.043 [2024-02-14 20:17:38.110948] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:01.043 [2024-02-14 20:17:38.111054] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:01.043 [2024-02-14 20:17:38.111061] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:01.043 [2024-02-14 20:17:38.111068] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:01.043 [2024-02-14 20:17:38.111083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:01.612 20:17:38 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:01.612 20:17:38 -- common/autotest_common.sh@850 -- # return 0
00:17:01.612 20:17:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:17:01.612 20:17:38 -- common/autotest_common.sh@716 -- # xtrace_disable
00:17:01.612 20:17:38 -- common/autotest_common.sh@10 -- # set +x
00:17:01.612 20:17:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:01.612 20:17:38 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:17:01.612 20:17:38 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:17:01.612 20:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:01.612 20:17:38 -- common/autotest_common.sh@10 -- # set +x
00:17:01.612 [2024-02-14 20:17:38.801491] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:01.612 20:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:01.612 20:17:38 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:01.612 20:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:01.612 20:17:38 -- common/autotest_common.sh@10 -- # set +x
00:17:01.612 20:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:01.612 20:17:38 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:01.612 20:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:01.612 20:17:38 -- common/autotest_common.sh@10 -- # set +x
00:17:01.612 [2024-02-14 20:17:38.825678] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:01.612 20:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:01.612 20:17:38 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:17:01.612 20:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:01.612 20:17:38 -- common/autotest_common.sh@10 -- # set +x
00:17:01.612 20:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:01.612 20:17:38 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:17:01.612 20:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:01.612 20:17:38 -- common/autotest_common.sh@10 -- # set +x
00:17:01.612 malloc0
00:17:01.612 20:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:01.612 20:17:38 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:01.612 20:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:01.612 20:17:38 -- common/autotest_common.sh@10 -- # set +x
00:17:01.612 20:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:01.612 20:17:38 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:17:01.612 20:17:38 -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:17:01.612 20:17:38 -- nvmf/common.sh@520 -- # config=()
00:17:01.612 20:17:38 -- nvmf/common.sh@520 -- # local subsystem config
00:17:01.612 20:17:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:17:01.612 20:17:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:17:01.612 {
00:17:01.612 "params": {
00:17:01.612 "name": "Nvme$subsystem",
00:17:01.612 "trtype": "$TEST_TRANSPORT",
00:17:01.612 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:01.612 "adrfam": "ipv4",
00:17:01.612 "trsvcid": "$NVMF_PORT",
00:17:01.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:01.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:01.612 "hdgst": ${hdgst:-false},
00:17:01.612 "ddgst": ${ddgst:-false}
00:17:01.612 },
00:17:01.612 "method": "bdev_nvme_attach_controller"
00:17:01.612 }
00:17:01.612 EOF
00:17:01.612 )")
00:17:01.612 20:17:38 -- nvmf/common.sh@542 -- # cat
00:17:01.612 20:17:38 -- nvmf/common.sh@544 -- # jq .
00:17:01.612 20:17:38 -- nvmf/common.sh@545 -- # IFS=,
00:17:01.612 20:17:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:17:01.612 "params": {
00:17:01.612 "name": "Nvme1",
00:17:01.612 "trtype": "tcp",
00:17:01.612 "traddr": "10.0.0.2",
00:17:01.612 "adrfam": "ipv4",
00:17:01.612 "trsvcid": "4420",
00:17:01.612 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:01.612 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:01.613 "hdgst": false,
00:17:01.613 "ddgst": false
00:17:01.613 },
00:17:01.613 "method": "bdev_nvme_attach_controller"
00:17:01.613 }'
00:17:01.613 [2024-02-14 20:17:38.900130] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:17:01.613 [2024-02-14 20:17:38.900174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774038 ]
00:17:01.613 EAL: No free 2048 kB hugepages reported on node 1
00:17:01.613 [2024-02-14 20:17:38.958887] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:01.613 [2024-02-14 20:17:39.028624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:01.613 [2024-02-14 20:17:39.028680] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:17:02.181 Running I/O for 10 seconds...
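
Two things in the trace above are easy to miss. First, gen_nvmf_target_json assembles the bdev_nvme_attach_controller parameters from a heredoc, pipes the document through jq, and bdevperf never sees a file: the harness passes --json /dev/fd/62 (and /dev/fd/63 for the second run below), a process-substitution file descriptor. A stripped-down sketch of the same trick, assuming the standard SPDK JSON-config wrapper layout; the jq -n construction is illustrative, not the harness's literal code:

   # Build a one-controller bdevperf config and hand it over via process substitution.
   ./build/examples/bdevperf -t 10 -q 128 -w verify -o 8192 --json <(
     jq -n '{subsystems: [{subsystem: "bdev", config: [{
       method: "bdev_nvme_attach_controller",
       params: {name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2",
                adrfam: "ipv4", trsvcid: "4420",
                subnqn: "nqn.2016-06.io.spdk:cnode1",
                hostnqn: "nqn.2016-06.io.spdk:host1",
                hdgst: false, ddgst: false}}]}]}'
   )

Second, the verify workload (-w verify) checks data integrity on completions rather than just pumping IOPS, and it runs a fixed 10 seconds (-t 10) before the zcopy-specific RPC churn starts.
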
00:17:12.169
00:17:12.169 Latency(us)
00:17:12.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:12.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:17:12.169 Verification LBA range: start 0x0 length 0x1000
00:17:12.169 Nvme1n1 : 10.01 13234.11 103.39 0.00 0.00 9651.00 1092.27 31706.94
00:17:12.169 ===================================================================================================================
00:17:12.169 Total : 13234.11 103.39 0.00 0.00 9651.00 1092.27 31706.94
00:17:12.169 [2024-02-14 20:17:49.377345] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:17:12.169 20:17:49 -- target/zcopy.sh@39 -- # perfpid=1775741
00:17:12.169 20:17:49 -- target/zcopy.sh@41 -- # xtrace_disable
00:17:12.169 20:17:49 -- common/autotest_common.sh@10 -- # set +x
00:17:12.169 20:17:49 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:17:12.169 20:17:49 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:17:12.169 20:17:49 -- nvmf/common.sh@520 -- # config=()
00:17:12.169 20:17:49 -- nvmf/common.sh@520 -- # local subsystem config
00:17:12.169 20:17:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:17:12.169 20:17:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:17:12.169 {
00:17:12.169 "params": {
00:17:12.169 "name": "Nvme$subsystem",
00:17:12.169 "trtype": "$TEST_TRANSPORT",
00:17:12.169 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:12.169 "adrfam": "ipv4",
00:17:12.169 "trsvcid": "$NVMF_PORT",
00:17:12.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:12.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:12.170 "hdgst": ${hdgst:-false},
00:17:12.170 "ddgst": ${ddgst:-false}
00:17:12.170 },
00:17:12.170 "method": "bdev_nvme_attach_controller"
00:17:12.170 }
00:17:12.170 EOF
00:17:12.170 )")
00:17:12.430 20:17:49 -- nvmf/common.sh@542 -- # cat
00:17:12.430 [2024-02-14 20:17:49.589183] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.589217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 20:17:49 -- nvmf/common.sh@544 -- # jq .
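
The verify pass above holds together arithmetically: 13234.11 IOPS at the 8192-byte IO size is 13234.11 × 8192 / 2^20 ≈ 103.4 MiB/s, matching the MiB/s column, and Little's law (queue depth over average latency, 128 / 9651 µs ≈ 13,263 requests per second) lands within a fraction of a percent of the measured IOPS, so the 128-deep queue really was kept full for the whole 10.01-second runtime. The trace then launches the second bdevperf instance with -t 5 -q 128 -w randrw -M 50, a five-second 50/50 random read/write mix against the same namespace.
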
00:17:12.430 20:17:49 -- nvmf/common.sh@545 -- # IFS=,
00:17:12.430 20:17:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:17:12.430 "params": {
00:17:12.430 "name": "Nvme1",
00:17:12.430 "trtype": "tcp",
00:17:12.430 "traddr": "10.0.0.2",
00:17:12.430 "adrfam": "ipv4",
00:17:12.430 "trsvcid": "4420",
00:17:12.430 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:12.430 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:12.430 "hdgst": false,
00:17:12.430 "ddgst": false
00:17:12.430 },
00:17:12.430 "method": "bdev_nvme_attach_controller"
00:17:12.430 }'
00:17:12.430 [2024-02-14 20:17:49.601185] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.601197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 [2024-02-14 20:17:49.609199] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.609210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 [2024-02-14 20:17:49.617219] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.617228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 [2024-02-14 20:17:49.622172] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:17:12.430 [2024-02-14 20:17:49.622219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775741 ]
00:17:12.430 [2024-02-14 20:17:49.625239] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.625251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 [2024-02-14 20:17:49.633262] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.633271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 [2024-02-14 20:17:49.645295] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.645305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 EAL: No free 2048 kB hugepages reported on node 1
00:17:12.430 [2024-02-14 20:17:49.653315] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.653324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 [2024-02-14 20:17:49.661336] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.661346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 [2024-02-14 20:17:49.669358] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.669368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 [2024-02-14 20:17:49.677380] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.430 [2024-02-14 20:17:49.677390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.430 [2024-02-14 20:17:49.684374] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
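
From here to the end of the excerpt the log is one pattern repeated: subsystem.c:1753 rejects an add-namespace RPC because NSID 1 is still attached, and nvmf_rpc.c:1513 (nvmf_rpc_ns_paused) surfaces that as "Unable to add namespace". The pairs are expected noise, consistent with the zcopy test re-issuing the namespace-add RPC in a loop while bdevperf I/O is in flight, so the target repeatedly walks the pause/add/resume path that zero-copy mode has to survive. A hypothetical reproduction of that kind of churn (the loop shape and iteration count are illustrative, not the test's literal code; rpc.py and the NQN are the ones used above):

   # Hammer the paused-namespace path: each attempt is expected to fail with
   # "Requested NSID 1 already in use" on the target while I/O keeps running.
   for _ in $(seq 1 100); do
     scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
   done

The timestamps on the pairs below advance by roughly 8 to 12 ms each, so every failed attempt is a full RPC round trip through the paused subsystem rather than a busy spin.
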
00:17:12.430 [2024-02-14 20:17:49.689418] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.689430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.697436] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.697448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.705456] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.705466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.713478] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.713488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.721504] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.721521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.733537] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.733550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.741553] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.741563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.749575] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.749589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.754243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.430 [2024-02-14 20:17:49.754286] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:17:12.430 [2024-02-14 20:17:49.757599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.757610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.765630] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.430 [2024-02-14 20:17:49.765651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.430 [2024-02-14 20:17:49.777669] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.431 [2024-02-14 20:17:49.777685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.431 [2024-02-14 20:17:49.785683] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.431 [2024-02-14 20:17:49.785694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.431 [2024-02-14 20:17:49.793699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.431 [2024-02-14 20:17:49.793710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.431 [2024-02-14 20:17:49.801719] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.431 [2024-02-14 20:17:49.801730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.431 [2024-02-14 20:17:49.809741] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.431 [2024-02-14 20:17:49.809751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.431 [2024-02-14 20:17:49.821776] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.431 [2024-02-14 20:17:49.821787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.431 [2024-02-14 20:17:49.829794] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.431 [2024-02-14 20:17:49.829803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.431 [2024-02-14 20:17:49.837818] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.431 [2024-02-14 20:17:49.837827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.431 [2024-02-14 20:17:49.845854] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.431 [2024-02-14 20:17:49.845875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.690 [2024-02-14 20:17:49.853874] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.690 [2024-02-14 20:17:49.853887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.690 [2024-02-14 20:17:49.865905] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.690 [2024-02-14 20:17:49.865919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.690 [2024-02-14 20:17:49.873923] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.690 [2024-02-14 20:17:49.873932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.690 [2024-02-14 20:17:49.881955] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.690 [2024-02-14 20:17:49.881964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.690 [2024-02-14 20:17:49.889977] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.690 [2024-02-14 20:17:49.889986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.690 [2024-02-14 20:17:49.897999] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.690 [2024-02-14 20:17:49.898008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.690 [2024-02-14 20:17:49.910038] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.690 [2024-02-14 20:17:49.910050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.690 [2024-02-14 20:17:49.918060] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.918073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:49.926079] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.926088] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:49.934099] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.934108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:49.942122] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.942131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:49.954157] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.954165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:49.962178] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.962188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:49.970204] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.970218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:49.978223] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.978232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:49.986242] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.986252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:49.998277] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:49.998286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:50.006303] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.006316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:50.014325] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.014338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:50.026370] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.026385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:50.038395] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.038405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:50.046415] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.046425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:50.054437] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.054449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:50.066510] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.066530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:50.074521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.074531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 Running I/O for 5 seconds... 00:17:12.691 [2024-02-14 20:17:50.094372] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.094393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.691 [2024-02-14 20:17:50.107288] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.691 [2024-02-14 20:17:50.107312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.951 [2024-02-14 20:17:50.114631] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.951 [2024-02-14 20:17:50.114658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.951 [2024-02-14 20:17:50.123628] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.951 [2024-02-14 20:17:50.123653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.951 [2024-02-14 20:17:50.132502] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.951 [2024-02-14 20:17:50.132521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.951 [2024-02-14 20:17:50.141664] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.951 [2024-02-14 20:17:50.141683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.951 [2024-02-14 20:17:50.155428] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.951 [2024-02-14 20:17:50.155447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.951 [2024-02-14 20:17:50.163845] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.951 [2024-02-14 20:17:50.163864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.951 [2024-02-14 20:17:50.173057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.951 [2024-02-14 20:17:50.173078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.951 [2024-02-14 20:17:50.182430] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.951 [2024-02-14 20:17:50.182448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.951 [2024-02-14 20:17:50.191003] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.191021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.199958] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.199976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.208314] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 
[2024-02-14 20:17:50.208332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.216751] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.216769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.225496] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.225514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.234880] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.234899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.243678] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.243697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.252829] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.252846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.261229] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.261251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.269643] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.269666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.277924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.277942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.286586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.286604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.295557] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.295574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.304538] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.304556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.313146] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.313164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.321633] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.321656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.330363] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.330380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.338485] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.338503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.347181] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.347199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.356107] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.356125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.952 [2024-02-14 20:17:50.365049] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.952 [2024-02-14 20:17:50.365068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.373736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.373754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.383009] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.383028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.391874] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.391893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.401088] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.401106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.410317] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.410335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.424204] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.424224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.432652] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.432675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.441190] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.441208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.449785] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.449803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.458770] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.458789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.467615] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.467633] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.476215] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.476234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.485005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.485023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.493463] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.493481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.502509] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.502527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.515883] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.515901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.212 [2024-02-14 20:17:50.522863] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.212 [2024-02-14 20:17:50.522880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.532276] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.532294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.541324] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.541342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.550387] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.550405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.564075] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.564093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.572709] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.572726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.581896] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.581914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.590049] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.590066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.598941] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.598959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.607696] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.607719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.617066] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.617084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.213 [2024-02-14 20:17:50.625684] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.213 [2024-02-14 20:17:50.625701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.472 [2024-02-14 20:17:50.633948] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.472 [2024-02-14 20:17:50.633966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.472 [2024-02-14 20:17:50.643065] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.472 [2024-02-14 20:17:50.643083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.472 [2024-02-14 20:17:50.652408] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.472 [2024-02-14 20:17:50.652426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.472 [2024-02-14 20:17:50.661510] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.472 [2024-02-14 20:17:50.661528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.472 [2024-02-14 20:17:50.670530] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.472 [2024-02-14 20:17:50.670547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.472 [2024-02-14 20:17:50.678965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.472 [2024-02-14 20:17:50.678983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.472 [2024-02-14 20:17:50.687913] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.687931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.701429] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.701448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.708762] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.708780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.718208] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.718226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.727035] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.727053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.735828] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.735847] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.744842] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.744861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.752931] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.752949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.761915] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.761932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.770764] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.770783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.779276] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.779300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.788326] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.788343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.797433] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.797451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.806652] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.806670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.815250] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.815269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.824161] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.824179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.832763] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.832783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.841470] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.841490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.850318] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.850337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.859085] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.859104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.473 [2024-02-14 20:17:50.868051] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.473 [2024-02-14 20:17:50.868070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats several hundred times, roughly every 9 ms, from 2024-02-14 20:17:50.868 through 20:17:53.737 ...]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.386 [2024-02-14 20:17:53.745663] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.386 [2024-02-14 20:17:53.745681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.386 [2024-02-14 20:17:53.754549] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.386 [2024-02-14 20:17:53.754568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.386 [2024-02-14 20:17:53.763286] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.386 [2024-02-14 20:17:53.763306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.386 [2024-02-14 20:17:53.772444] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.386 [2024-02-14 20:17:53.772462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.386 [2024-02-14 20:17:53.781471] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.386 [2024-02-14 20:17:53.781490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.386 [2024-02-14 20:17:53.789973] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.386 [2024-02-14 20:17:53.789992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.803960] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.803979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.812404] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.812423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.820638] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.820664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.829905] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.829923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.838702] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.838720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.856554] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.856574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.864509] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.864527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.873164] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.873182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.881906] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.881924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.890615] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.890634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.899305] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.899324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.907806] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.907824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.916230] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.916247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.925238] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.925256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.934038] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.934056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.942658] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.646 [2024-02-14 20:17:53.942692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.646 [2024-02-14 20:17:53.951521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:53.951539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:53.959953] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:53.959972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:53.968336] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:53.968355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:53.982477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:53.982496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:53.989469] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:53.989488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:53.999363] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:53.999383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:54.008471] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:54.008489] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:54.016841] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:54.016859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:54.026209] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:54.026227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:54.034710] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:54.034728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:54.043166] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:54.043185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:54.052437] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:54.052455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.647 [2024-02-14 20:17:54.061495] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.647 [2024-02-14 20:17:54.061514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.070856] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.070874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.079125] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.079143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.088499] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.088517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.097975] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.098009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.107108] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.107126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.116217] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.116235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.124507] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.124525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.133061] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.133079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.141259] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.141276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.150333] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.150351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.164062] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.164080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.172578] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.172596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.181306] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.181323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.190371] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.190388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.198869] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.198887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.212881] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.212899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.220975] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.220993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.229829] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.229847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.238673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.238690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.247320] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.247337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.260609] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.260628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.268631] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.268655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.277734] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.277752] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.286183] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.286201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.294510] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.294527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.308244] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.308263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.907 [2024-02-14 20:17:54.316569] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.907 [2024-02-14 20:17:54.316587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.167 [2024-02-14 20:17:54.325104] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.167 [2024-02-14 20:17:54.325125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.167 [2024-02-14 20:17:54.333494] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.167 [2024-02-14 20:17:54.333511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.167 [2024-02-14 20:17:54.342751] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.167 [2024-02-14 20:17:54.342769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.167 [2024-02-14 20:17:54.356677] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.167 [2024-02-14 20:17:54.356695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.167 [2024-02-14 20:17:54.364986] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.167 [2024-02-14 20:17:54.365004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.167 [2024-02-14 20:17:54.374068] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.167 [2024-02-14 20:17:54.374087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.167 [2024-02-14 20:17:54.382639] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.167 [2024-02-14 20:17:54.382661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.167 [2024-02-14 20:17:54.391384] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.167 [2024-02-14 20:17:54.391401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.167 [2024-02-14 20:17:54.400144] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.400161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.408394] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.408412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.416895] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.416912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.425405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.425422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.434545] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.434563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.448616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.448635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.457603] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.457621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.466122] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.466140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.475785] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.475802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.484046] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.484064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.491267] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.491285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.501894] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.501917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.512156] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.512174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.523346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.523365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.533407] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.533426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.547269] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.547288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.553700] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.553718] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.564445] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.564463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.573429] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.573447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.168 [2024-02-14 20:17:54.582054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.168 [2024-02-14 20:17:54.582072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.427 [2024-02-14 20:17:54.590781] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.427 [2024-02-14 20:17:54.590798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.427 [2024-02-14 20:17:54.599090] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.427 [2024-02-14 20:17:54.599108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.427 [2024-02-14 20:17:54.607823] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.427 [2024-02-14 20:17:54.607841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.427 [2024-02-14 20:17:54.616693] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.427 [2024-02-14 20:17:54.616712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.427 [2024-02-14 20:17:54.626039] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.427 [2024-02-14 20:17:54.626058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.638611] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.638630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.646958] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.646976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.655983] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.656000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.664856] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.664874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.673060] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.673078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.686607] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.686629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.693045] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.693063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.702862] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.702880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.709789] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.709806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.720140] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.720158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.729008] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.729026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.737196] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.737213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.745611] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.745628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.754221] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.754239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.763032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.763050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.772024] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.772043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.780382] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.780400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.790178] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.790196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.800582] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.800599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.807616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.807634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.822341] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.822360] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.830937] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.830955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.428 [2024-02-14 20:17:54.839672] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.428 [2024-02-14 20:17:54.839690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.848538] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.848556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.857583] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.857604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.871238] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.871256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.878136] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.878154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.888131] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.888150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.896841] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.896859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.905545] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.905563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.914335] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.914354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.923054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.923072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.933323] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.933341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.942148] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.942166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.952135] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.952152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.960878] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.960896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.969195] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.969213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.979138] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.979157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.986427] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.986445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:54.993723] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:54.993740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.008833] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.008853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.016961] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.016980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.025212] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.025231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.034014] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.034037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.042880] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.042899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.056939] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.056960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.063704] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.063724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.073739] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.073758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.081973] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.081993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.688 [2024-02-14 20:17:55.089953] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.688 [2024-02-14 20:17:55.089972] 
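The error pair above is the loop's expected outcome: test/nvmf/target/zcopy.sh keeps asking the target to attach a bdev as NSID 1 of a subsystem whose NSID 1 is already occupied, so every RPC is rejected. A minimal sketch of one such rejection against a running target, using SPDK's scripts/rpc.py (the malloc bdev names are illustrative; the NQN and serial are the ones this run uses):

    # claim NSID 1 twice on the same subsystem; the second call is refused
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py bdev_malloc_create -b malloc1 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1   # "Requested NSID 1 already in use"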
00:17:17.688
00:17:17.688 Latency(us)
00:17:17.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:17.688 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:17.688 Nvme1n1 : 5.01 17070.19 133.36 0.00 0.00 7492.56 1888.06 28336.52
00:17:17.688 ===================================================================================================================
00:17:17.688 Total : 17070.19 133.36 0.00 0.00 7492.56 1888.06 28336.52
00:17:17.688 [2024-02-14 20:17:55.095404] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
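A quick sanity check on that summary: 17070.19 IOPS at the job's 8192-byte I/O size is 17070.19 × 8192 ≈ 139.84 MB/s, i.e. 139,838,996 / 2^20 ≈ 133.36 MiB/s, exactly the MiB/s column. Little's law is consistent too: queue depth 128 divided by the 7492.56 µs average latency gives ≈ 17,083 IOPS, within 0.1% of the measured figure.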
00:17:17.688 [2024-02-14 20:17:55.100390] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:17.688 [2024-02-14 20:17:55.100408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats from 20:17:55.108407 through 20:17:55.300937 while the I/O job drains; the lines are identical apart from timestamps ...]
00:17:17.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1775741) - No such process 00:17:17.948 20:17:55 -- target/zcopy.sh@49 -- # wait 1775741 00:17:17.948 20:17:55 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:17.948 20:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.948 20:17:55 -- common/autotest_common.sh@10 --
# set +x 00:17:17.949 20:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.949 20:17:55 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:17.949 20:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.949 20:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:17.949 delay0 00:17:17.949 20:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.949 20:17:55 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:17.949 20:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.949 20:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:17.949 20:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.949 20:17:55 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:17.949 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.208 [2024-02-14 20:17:55.389526] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:24.789 Initializing NVMe Controllers 00:17:24.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:24.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:24.789 Initialization complete. Launching workers. 00:17:24.789 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 71 00:17:24.789 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 353, failed to submit 38 00:17:24.789 success 153, unsuccess 200, failed 0 00:17:24.789 20:18:01 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:24.789 20:18:01 -- target/zcopy.sh@60 -- # nvmftestfini 00:17:24.789 20:18:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:24.789 20:18:01 -- nvmf/common.sh@116 -- # sync 00:17:24.789 20:18:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:24.789 20:18:01 -- nvmf/common.sh@119 -- # set +e 00:17:24.789 20:18:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:24.789 20:18:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:24.789 rmmod nvme_tcp 00:17:24.789 rmmod nvme_fabrics 00:17:24.789 rmmod nvme_keyring 00:17:24.789 20:18:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:24.789 20:18:01 -- nvmf/common.sh@123 -- # set -e 00:17:24.789 20:18:01 -- nvmf/common.sh@124 -- # return 0 00:17:24.789 20:18:01 -- nvmf/common.sh@477 -- # '[' -n 1773793 ']' 00:17:24.789 20:18:01 -- nvmf/common.sh@478 -- # killprocess 1773793 00:17:24.789 20:18:01 -- common/autotest_common.sh@924 -- # '[' -z 1773793 ']' 00:17:24.789 20:18:01 -- common/autotest_common.sh@928 -- # kill -0 1773793 00:17:24.789 20:18:01 -- common/autotest_common.sh@929 -- # uname 00:17:24.789 20:18:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:24.789 20:18:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1773793 00:17:24.789 20:18:01 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:17:24.789 20:18:01 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:17:24.789 20:18:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1773793' 00:17:24.789 killing process with pid 1773793 00:17:24.789 20:18:01 -- common/autotest_common.sh@943 -- # kill 1773793 
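For reference, the abort stage that produced the stats above reduces to three commands, replayed here from this run's xtrace (paths relative to the SPDK checkout; all four bdev_delay_create delay arguments are set to 1000000 µs, roughly one second per I/O):

    # wrap malloc0 in a delay bdev so in-flight I/Os stay pending long enough to abort
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive 64-deep 50/50 randrw for 5s over TCP, aborting outstanding commands as it goes
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The 'success 153, unsuccess 200' line tallies individual abort commands (presumably those that took effect versus those whose target I/O had already completed), not data errors.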
00:17:24.789 20:18:01 -- common/autotest_common.sh@948 -- # wait 1773793 00:17:24.789 20:18:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:24.789 20:18:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:24.789 20:18:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:24.789 20:18:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.789 20:18:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:24.789 20:18:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.789 20:18:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.789 20:18:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.698 20:18:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:26.698 00:17:26.698 real 0m32.311s 00:17:26.698 user 0m43.347s 00:17:26.698 sys 0m11.154s 00:17:26.698 20:18:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:26.698 20:18:03 -- common/autotest_common.sh@10 -- # set +x 00:17:26.698 ************************************ 00:17:26.698 END TEST nvmf_zcopy 00:17:26.698 ************************************ 00:17:26.698 20:18:03 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:26.698 20:18:03 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:17:26.698 20:18:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:26.698 20:18:03 -- common/autotest_common.sh@10 -- # set +x 00:17:26.698 ************************************ 00:17:26.698 START TEST nvmf_nmic 00:17:26.698 ************************************ 00:17:26.698 20:18:03 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:26.698 * Looking for test storage... 
00:17:26.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.698 20:18:04 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.698 20:18:04 -- nvmf/common.sh@7 -- # uname -s 00:17:26.698 20:18:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.698 20:18:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.698 20:18:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.698 20:18:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.698 20:18:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.698 20:18:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.698 20:18:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.698 20:18:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.698 20:18:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.698 20:18:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.698 20:18:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:26.698 20:18:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:26.698 20:18:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.698 20:18:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.698 20:18:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.698 20:18:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.698 20:18:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.698 20:18:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.698 20:18:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.698 20:18:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.698 20:18:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.698 20:18:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.698 20:18:04 -- paths/export.sh@5 -- # export PATH 00:17:26.698 20:18:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.698 20:18:04 -- nvmf/common.sh@46 -- # : 0 00:17:26.698 20:18:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:26.698 20:18:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:26.698 20:18:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:26.698 20:18:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.698 20:18:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.698 20:18:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:26.698 20:18:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:26.698 20:18:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:26.698 20:18:04 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.698 20:18:04 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.698 20:18:04 -- target/nmic.sh@14 -- # nvmftestinit 00:17:26.698 20:18:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:26.698 20:18:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.698 20:18:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:26.698 20:18:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:26.698 20:18:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:26.698 20:18:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.698 20:18:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.698 20:18:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.958 20:18:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:26.958 20:18:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:26.958 20:18:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:26.958 20:18:04 -- common/autotest_common.sh@10 -- # set +x 00:17:32.230 20:18:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:32.230 20:18:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:32.230 20:18:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:32.230 20:18:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:32.230 20:18:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:32.230 20:18:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:32.231 20:18:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:32.231 20:18:09 -- nvmf/common.sh@294 -- # net_devs=() 00:17:32.231 20:18:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:32.231 20:18:09 -- nvmf/common.sh@295 -- # 
e810=() 00:17:32.231 20:18:09 -- nvmf/common.sh@295 -- # local -ga e810 00:17:32.231 20:18:09 -- nvmf/common.sh@296 -- # x722=() 00:17:32.231 20:18:09 -- nvmf/common.sh@296 -- # local -ga x722 00:17:32.231 20:18:09 -- nvmf/common.sh@297 -- # mlx=() 00:17:32.231 20:18:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:32.231 20:18:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.231 20:18:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:32.231 20:18:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:32.231 20:18:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:32.231 20:18:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:32.231 20:18:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:32.231 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:32.231 20:18:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:32.231 20:18:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:32.231 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:32.231 20:18:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:32.231 20:18:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:32.231 20:18:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.231 20:18:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:32.231 20:18:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.231 20:18:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:32.231 Found net 
devices under 0000:af:00.0: cvl_0_0 00:17:32.231 20:18:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.231 20:18:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:32.231 20:18:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.231 20:18:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:32.231 20:18:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.231 20:18:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:32.231 Found net devices under 0000:af:00.1: cvl_0_1 00:17:32.231 20:18:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.231 20:18:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:32.231 20:18:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:32.231 20:18:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:32.231 20:18:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.231 20:18:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.231 20:18:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.231 20:18:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:32.231 20:18:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.231 20:18:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.231 20:18:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:32.231 20:18:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.231 20:18:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.231 20:18:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:32.231 20:18:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:32.231 20:18:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.231 20:18:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.231 20:18:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.231 20:18:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.231 20:18:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:32.231 20:18:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.231 20:18:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.231 20:18:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.231 20:18:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:32.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:17:32.231 00:17:32.231 --- 10.0.0.2 ping statistics --- 00:17:32.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.231 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:17:32.231 20:18:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:32.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:17:32.231 00:17:32.231 --- 10.0.0.1 ping statistics --- 00:17:32.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.231 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:17:32.231 20:18:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.231 20:18:09 -- nvmf/common.sh@410 -- # return 0 00:17:32.231 20:18:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:32.231 20:18:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.231 20:18:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:32.231 20:18:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.231 20:18:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:32.231 20:18:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:32.231 20:18:09 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:32.231 20:18:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:32.231 20:18:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:32.231 20:18:09 -- common/autotest_common.sh@10 -- # set +x 00:17:32.231 20:18:09 -- nvmf/common.sh@469 -- # nvmfpid=1782019 00:17:32.231 20:18:09 -- nvmf/common.sh@470 -- # waitforlisten 1782019 00:17:32.231 20:18:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:32.231 20:18:09 -- common/autotest_common.sh@817 -- # '[' -z 1782019 ']' 00:17:32.231 20:18:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.231 20:18:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:32.231 20:18:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.231 20:18:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:32.231 20:18:09 -- common/autotest_common.sh@10 -- # set +x 00:17:32.231 [2024-02-14 20:18:09.486864] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:32.231 [2024-02-14 20:18:09.486905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.231 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.231 [2024-02-14 20:18:09.549376] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.231 [2024-02-14 20:18:09.624497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:32.231 [2024-02-14 20:18:09.624604] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.231 [2024-02-14 20:18:09.624612] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.231 [2024-02-14 20:18:09.624618] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
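[annotation] Stepping back through the nvmf_tcp_init trace above: the two ports of the same E810 NIC are wired into a point-to-point NVMe/TCP rig by moving the target-side port (cvl_0_0) into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace; the two-way ping confirms the path before the target starts. Condensed from the commands in the trace, with interface and namespace names as they appear here:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from a clean slate
    ip netns add cvl_0_0_ns_spdk                           # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator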
00:17:32.231 [2024-02-14 20:18:09.624684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.231 [2024-02-14 20:18:09.624784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.231 [2024-02-14 20:18:09.624861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.231 [2024-02-14 20:18:09.624862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.169 20:18:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:33.169 20:18:10 -- common/autotest_common.sh@850 -- # return 0 00:17:33.169 20:18:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:33.169 20:18:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 20:18:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.169 20:18:10 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:33.169 20:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 [2024-02-14 20:18:10.321855] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.169 20:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.169 20:18:10 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:33.169 20:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 Malloc0 00:17:33.169 20:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.169 20:18:10 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:33.169 20:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 20:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.169 20:18:10 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:33.169 20:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 20:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.169 20:18:10 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.169 20:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 [2024-02-14 20:18:10.373684] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.169 20:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.169 20:18:10 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:33.169 test case1: single bdev can't be used in multiple subsystems 00:17:33.169 20:18:10 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:33.169 20:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 20:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.169 20:18:10 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:33.169 20:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 
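[annotation] The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py against the target's RPC socket. Replayed directly, the nmic setup plus the negative test that follows would look roughly like this (default socket path /var/tmp/spdk.sock assumed; the last call is expected to fail because Malloc0 is already claimed exclusive_write by cnode1):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # test case1: expected -32602 "Invalid parameters" -- one bdev, two subsystems
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || true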
00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 20:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.169 20:18:10 -- target/nmic.sh@28 -- # nmic_status=0 00:17:33.169 20:18:10 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:33.169 20:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 [2024-02-14 20:18:10.397591] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:33.169 [2024-02-14 20:18:10.397611] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:33.169 [2024-02-14 20:18:10.397617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.169 request: 00:17:33.169 { 00:17:33.169 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:33.169 "namespace": { 00:17:33.169 "bdev_name": "Malloc0" 00:17:33.169 }, 00:17:33.169 "method": "nvmf_subsystem_add_ns", 00:17:33.169 "req_id": 1 00:17:33.169 } 00:17:33.169 Got JSON-RPC error response 00:17:33.169 response: 00:17:33.169 { 00:17:33.169 "code": -32602, 00:17:33.169 "message": "Invalid parameters" 00:17:33.169 } 00:17:33.169 20:18:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:33.169 20:18:10 -- target/nmic.sh@29 -- # nmic_status=1 00:17:33.169 20:18:10 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:33.169 20:18:10 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:33.169 Adding namespace failed - expected result. 00:17:33.169 20:18:10 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:33.169 test case2: host connect to nvmf target in multiple paths 00:17:33.169 20:18:10 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:33.169 20:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.169 20:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:33.169 [2024-02-14 20:18:10.409715] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:33.169 20:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.169 20:18:10 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:34.546 20:18:11 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:35.492 20:18:12 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:35.492 20:18:12 -- common/autotest_common.sh@1175 -- # local i=0 00:17:35.492 20:18:12 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:35.492 20:18:12 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:17:35.492 20:18:12 -- common/autotest_common.sh@1182 -- # sleep 2 00:17:37.442 20:18:14 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:17:37.442 20:18:14 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:17:37.442 20:18:14 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:17:37.442 20:18:14 -- common/autotest_common.sh@1184 -- # 
nvme_devices=1 00:17:37.442 20:18:14 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:17:37.442 20:18:14 -- common/autotest_common.sh@1185 -- # return 0 00:17:37.442 20:18:14 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:37.442 [global] 00:17:37.442 thread=1 00:17:37.442 invalidate=1 00:17:37.442 rw=write 00:17:37.442 time_based=1 00:17:37.442 runtime=1 00:17:37.442 ioengine=libaio 00:17:37.442 direct=1 00:17:37.442 bs=4096 00:17:37.442 iodepth=1 00:17:37.442 norandommap=0 00:17:37.442 numjobs=1 00:17:37.442 00:17:37.442 verify_dump=1 00:17:37.442 verify_backlog=512 00:17:37.442 verify_state_save=0 00:17:37.442 do_verify=1 00:17:37.442 verify=crc32c-intel 00:17:37.442 [job0] 00:17:37.442 filename=/dev/nvme0n1 00:17:37.442 Could not set queue depth (nvme0n1) 00:17:37.700 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:37.700 fio-3.35 00:17:37.700 Starting 1 thread 00:17:39.077 00:17:39.077 job0: (groupid=0, jobs=1): err= 0: pid=1783104: Wed Feb 14 20:18:16 2024 00:17:39.077 read: IOPS=18, BW=74.8KiB/s (76.6kB/s)(76.0KiB/1016msec) 00:17:39.077 slat (nsec): min=9822, max=27517, avg=22420.37, stdev=3410.71 00:17:39.077 clat (usec): min=41121, max=42134, avg=41907.07, stdev=208.74 00:17:39.077 lat (usec): min=41142, max=42159, avg=41929.49, stdev=209.40 00:17:39.077 clat percentiles (usec): 00:17:39.077 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:39.077 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:39.077 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:39.077 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:39.077 | 99.99th=[42206] 00:17:39.077 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:17:39.077 slat (usec): min=10, max=26870, avg=64.51, stdev=1187.00 00:17:39.077 clat (usec): min=254, max=713, avg=358.48, stdev=65.17 00:17:39.077 lat (usec): min=266, max=27533, avg=422.99, stdev=1202.21 00:17:39.077 clat percentiles (usec): 00:17:39.077 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:17:39.077 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 363], 00:17:39.077 | 70.00th=[ 400], 80.00th=[ 412], 90.00th=[ 445], 95.00th=[ 474], 00:17:39.077 | 99.00th=[ 506], 99.50th=[ 545], 99.90th=[ 717], 99.95th=[ 717], 00:17:39.077 | 99.99th=[ 717] 00:17:39.077 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:39.077 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:39.077 lat (usec) : 500=95.10%, 750=1.32% 00:17:39.077 lat (msec) : 50=3.58% 00:17:39.077 cpu : usr=0.10%, sys=1.28%, ctx=533, majf=0, minf=2 00:17:39.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.077 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.077 00:17:39.077 Run status group 0 (all jobs): 00:17:39.077 READ: bw=74.8KiB/s (76.6kB/s), 74.8KiB/s-74.8KiB/s (76.6kB/s-76.6kB/s), io=76.0KiB (77.8kB), run=1016-1016msec 00:17:39.077 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), 
run=1016-1016msec 00:17:39.077 00:17:39.077 Disk stats (read/write): 00:17:39.077 nvme0n1: ios=41/512, merge=0/0, ticks=1641/178, in_queue=1819, util=98.70% 00:17:39.077 20:18:16 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:39.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:39.336 20:18:16 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:39.336 20:18:16 -- common/autotest_common.sh@1196 -- # local i=0 00:17:39.336 20:18:16 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:17:39.336 20:18:16 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.336 20:18:16 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:17:39.336 20:18:16 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.336 20:18:16 -- common/autotest_common.sh@1208 -- # return 0 00:17:39.336 20:18:16 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:39.336 20:18:16 -- target/nmic.sh@53 -- # nvmftestfini 00:17:39.336 20:18:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:39.336 20:18:16 -- nvmf/common.sh@116 -- # sync 00:17:39.336 20:18:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:39.336 20:18:16 -- nvmf/common.sh@119 -- # set +e 00:17:39.336 20:18:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:39.336 20:18:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:39.336 rmmod nvme_tcp 00:17:39.336 rmmod nvme_fabrics 00:17:39.336 rmmod nvme_keyring 00:17:39.336 20:18:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:39.336 20:18:16 -- nvmf/common.sh@123 -- # set -e 00:17:39.336 20:18:16 -- nvmf/common.sh@124 -- # return 0 00:17:39.336 20:18:16 -- nvmf/common.sh@477 -- # '[' -n 1782019 ']' 00:17:39.336 20:18:16 -- nvmf/common.sh@478 -- # killprocess 1782019 00:17:39.336 20:18:16 -- common/autotest_common.sh@924 -- # '[' -z 1782019 ']' 00:17:39.336 20:18:16 -- common/autotest_common.sh@928 -- # kill -0 1782019 00:17:39.336 20:18:16 -- common/autotest_common.sh@929 -- # uname 00:17:39.336 20:18:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:39.336 20:18:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1782019 00:17:39.336 20:18:16 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:39.336 20:18:16 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:39.336 20:18:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1782019' 00:17:39.336 killing process with pid 1782019 00:17:39.336 20:18:16 -- common/autotest_common.sh@943 -- # kill 1782019 00:17:39.336 20:18:16 -- common/autotest_common.sh@948 -- # wait 1782019 00:17:39.595 20:18:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:39.595 20:18:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:39.595 20:18:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:39.595 20:18:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.595 20:18:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:39.595 20:18:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.595 20:18:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.595 20:18:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.131 20:18:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:42.131 00:17:42.131 real 0m14.926s 00:17:42.131 user 0m35.609s 00:17:42.131 sys 0m4.640s 00:17:42.131 20:18:18 -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:17:42.131 20:18:18 -- common/autotest_common.sh@10 -- # set +x 00:17:42.131 ************************************ 00:17:42.131 END TEST nvmf_nmic 00:17:42.131 ************************************ 00:17:42.131 20:18:18 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:42.131 20:18:18 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:17:42.131 20:18:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:42.131 20:18:18 -- common/autotest_common.sh@10 -- # set +x 00:17:42.131 ************************************ 00:17:42.131 START TEST nvmf_fio_target 00:17:42.131 ************************************ 00:17:42.131 20:18:18 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:42.131 * Looking for test storage... 00:17:42.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.132 20:18:19 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.132 20:18:19 -- nvmf/common.sh@7 -- # uname -s 00:17:42.132 20:18:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.132 20:18:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.132 20:18:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.132 20:18:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.132 20:18:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.132 20:18:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.132 20:18:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.132 20:18:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.132 20:18:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.132 20:18:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.132 20:18:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:42.132 20:18:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:42.132 20:18:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.132 20:18:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.132 20:18:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.132 20:18:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.132 20:18:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.132 20:18:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.132 20:18:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.132 20:18:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.132 20:18:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.132 20:18:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.132 20:18:19 -- paths/export.sh@5 -- # export PATH 00:17:42.132 20:18:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.132 20:18:19 -- nvmf/common.sh@46 -- # : 0 00:17:42.132 20:18:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:42.132 20:18:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:42.132 20:18:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:42.132 20:18:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.132 20:18:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.132 20:18:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:42.132 20:18:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:42.132 20:18:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:42.132 20:18:19 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:42.132 20:18:19 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.132 20:18:19 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.132 20:18:19 -- target/fio.sh@16 -- # nvmftestinit 00:17:42.132 20:18:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:42.132 20:18:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.132 20:18:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:42.132 20:18:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:42.132 20:18:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:42.132 20:18:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.132 20:18:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.132 20:18:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.132 20:18:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:42.132 20:18:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:42.132 20:18:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:42.132 20:18:19 -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.703 20:18:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:48.703 20:18:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:48.703 20:18:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:48.703 20:18:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:48.703 20:18:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:48.703 20:18:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:48.703 20:18:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:48.703 20:18:25 -- nvmf/common.sh@294 -- # net_devs=() 00:17:48.703 20:18:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:48.703 20:18:25 -- nvmf/common.sh@295 -- # e810=() 00:17:48.703 20:18:25 -- nvmf/common.sh@295 -- # local -ga e810 00:17:48.703 20:18:25 -- nvmf/common.sh@296 -- # x722=() 00:17:48.703 20:18:25 -- nvmf/common.sh@296 -- # local -ga x722 00:17:48.703 20:18:25 -- nvmf/common.sh@297 -- # mlx=() 00:17:48.703 20:18:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:48.703 20:18:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.703 20:18:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:48.703 20:18:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:48.703 20:18:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:48.703 20:18:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:48.703 20:18:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:48.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:48.703 20:18:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:48.703 20:18:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:48.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:48.703 20:18:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
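[annotation] For context on the pci_devs / e810 / x722 / mlx xtrace lines here: gather_supported_nvmf_pci_devs buckets NICs by PCI vendor:device ID out of a pci_bus_cache lookup and, since this job runs with SPDK_TEST_NVMF_NICS=e810, keeps only the E810 family before resolving each port to its netdev under sysfs. A rough sketch of that lookup, with illustrative cache contents (the real cache is populated by scanning the PCI bus):

    declare -A pci_bus_cache                     # "vendor:device" -> BDFs (illustrative)
    pci_bus_cache["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1"
    e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
    pci_devs=("${e810[@]}")                      # only the selected NIC family survives
    for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")    # e.g. cvl_0_0, cvl_0_1
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done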
00:17:48.703 20:18:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:48.703 20:18:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:48.703 20:18:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:48.703 20:18:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.703 20:18:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:48.703 20:18:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.704 20:18:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:48.704 Found net devices under 0000:af:00.0: cvl_0_0 00:17:48.704 20:18:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.704 20:18:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:48.704 20:18:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.704 20:18:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:48.704 20:18:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.704 20:18:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:48.704 Found net devices under 0000:af:00.1: cvl_0_1 00:17:48.704 20:18:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.704 20:18:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:48.704 20:18:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:48.704 20:18:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:48.704 20:18:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:48.704 20:18:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:48.704 20:18:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.704 20:18:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.704 20:18:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.704 20:18:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:48.704 20:18:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.704 20:18:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.704 20:18:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:48.704 20:18:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.704 20:18:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.704 20:18:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:48.704 20:18:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:48.704 20:18:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.704 20:18:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.704 20:18:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.704 20:18:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.704 20:18:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:48.704 20:18:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.704 20:18:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.704 20:18:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.704 20:18:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:48.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:48.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:17:48.704 00:17:48.704 --- 10.0.0.2 ping statistics --- 00:17:48.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.704 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:48.704 20:18:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:48.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:17:48.704 00:17:48.704 --- 10.0.0.1 ping statistics --- 00:17:48.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.704 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:17:48.704 20:18:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.704 20:18:25 -- nvmf/common.sh@410 -- # return 0 00:17:48.704 20:18:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:48.704 20:18:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.704 20:18:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:48.704 20:18:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:48.704 20:18:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.704 20:18:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:48.704 20:18:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:48.704 20:18:25 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:48.704 20:18:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:48.704 20:18:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:48.704 20:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:48.704 20:18:25 -- nvmf/common.sh@469 -- # nvmfpid=1787137 00:17:48.704 20:18:25 -- nvmf/common.sh@470 -- # waitforlisten 1787137 00:17:48.704 20:18:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:48.704 20:18:25 -- common/autotest_common.sh@817 -- # '[' -z 1787137 ']' 00:17:48.704 20:18:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.704 20:18:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:48.704 20:18:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.704 20:18:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:48.704 20:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:48.704 [2024-02-14 20:18:25.394293] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:48.704 [2024-02-14 20:18:25.394332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.704 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.704 [2024-02-14 20:18:25.455305] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.704 [2024-02-14 20:18:25.529431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:48.704 [2024-02-14 20:18:25.529542] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.704 [2024-02-14 20:18:25.529550] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
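[annotation] As in the nmic run above, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the app answers on its RPC socket before any rpc_cmd is issued. A simplified sketch of that startup/wait pattern (the polling loop is an illustration, not the actual waitforlisten helper):

    NS=(ip netns exec cvl_0_0_ns_spdk)
    "${NS[@]}" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the RPC server is up; bail out if the target died
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.5
    done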
00:17:48.704 [2024-02-14 20:18:25.529557] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.704 [2024-02-14 20:18:25.529605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.704 [2024-02-14 20:18:25.529628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.704 [2024-02-14 20:18:25.529693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.704 [2024-02-14 20:18:25.529694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.962 20:18:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.962 20:18:26 -- common/autotest_common.sh@850 -- # return 0 00:17:48.962 20:18:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:48.962 20:18:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:48.962 20:18:26 -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 20:18:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.962 20:18:26 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:48.962 [2024-02-14 20:18:26.376333] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.221 20:18:26 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.221 20:18:26 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:49.221 20:18:26 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.479 20:18:26 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:49.479 20:18:26 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.738 20:18:26 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:49.738 20:18:26 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.738 20:18:27 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:49.738 20:18:27 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:49.996 20:18:27 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:50.254 20:18:27 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:50.254 20:18:27 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:50.512 20:18:27 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:50.512 20:18:27 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:50.512 20:18:27 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:50.512 20:18:27 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:50.770 20:18:28 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:51.028 20:18:28 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:51.028 20:18:28 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.028 20:18:28 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:51.028 20:18:28 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:51.286 20:18:28 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.544 [2024-02-14 20:18:28.760947] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.544 20:18:28 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:51.802 20:18:28 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:51.802 20:18:29 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:53.176 20:18:30 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:53.176 20:18:30 -- common/autotest_common.sh@1175 -- # local i=0 00:17:53.176 20:18:30 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:53.176 20:18:30 -- common/autotest_common.sh@1177 -- # [[ -n 4 ]] 00:17:53.176 20:18:30 -- common/autotest_common.sh@1178 -- # nvme_device_counter=4 00:17:53.176 20:18:30 -- common/autotest_common.sh@1182 -- # sleep 2 00:17:55.085 20:18:32 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:17:55.086 20:18:32 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:17:55.086 20:18:32 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:17:55.086 20:18:32 -- common/autotest_common.sh@1184 -- # nvme_devices=4 00:17:55.086 20:18:32 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.086 20:18:32 -- common/autotest_common.sh@1185 -- # return 0 00:17:55.086 20:18:32 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:55.086 [global] 00:17:55.086 thread=1 00:17:55.086 invalidate=1 00:17:55.086 rw=write 00:17:55.086 time_based=1 00:17:55.086 runtime=1 00:17:55.086 ioengine=libaio 00:17:55.086 direct=1 00:17:55.086 bs=4096 00:17:55.086 iodepth=1 00:17:55.086 norandommap=0 00:17:55.086 numjobs=1 00:17:55.086 00:17:55.086 verify_dump=1 00:17:55.086 verify_backlog=512 00:17:55.086 verify_state_save=0 00:17:55.086 do_verify=1 00:17:55.086 verify=crc32c-intel 00:17:55.086 [job0] 00:17:55.086 filename=/dev/nvme0n1 00:17:55.086 [job1] 00:17:55.086 filename=/dev/nvme0n2 00:17:55.086 [job2] 00:17:55.086 filename=/dev/nvme0n3 00:17:55.086 [job3] 00:17:55.086 filename=/dev/nvme0n4 00:17:55.086 Could not set queue depth (nvme0n1) 00:17:55.086 Could not set queue depth (nvme0n2) 00:17:55.086 Could not set queue depth (nvme0n3) 00:17:55.086 Could not set queue depth (nvme0n4) 00:17:55.344 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.344 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.344 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:17:55.344 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.344 fio-3.35 00:17:55.344 Starting 4 threads 00:17:56.720 00:17:56.720 job0: (groupid=0, jobs=1): err= 0: pid=1788554: Wed Feb 14 20:18:33 2024 00:17:56.720 read: IOPS=19, BW=78.8KiB/s (80.7kB/s)(80.0KiB/1015msec) 00:17:56.720 slat (nsec): min=10270, max=22894, avg=21899.65, stdev=2757.64 00:17:56.720 clat (usec): min=41027, max=42027, avg=41829.32, stdev=336.04 00:17:56.720 lat (usec): min=41050, max=42050, avg=41851.22, stdev=337.39 00:17:56.720 clat percentiles (usec): 00:17:56.720 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:56.720 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:56.720 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:56.720 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:56.720 | 99.99th=[42206] 00:17:56.720 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:17:56.720 slat (nsec): min=6935, max=46511, avg=12422.76, stdev=2567.00 00:17:56.720 clat (usec): min=187, max=1938, avg=330.38, stdev=157.92 00:17:56.720 lat (usec): min=198, max=1954, avg=342.80, stdev=158.23 00:17:56.720 clat percentiles (usec): 00:17:56.720 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 219], 00:17:56.720 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 310], 00:17:56.720 | 70.00th=[ 330], 80.00th=[ 437], 90.00th=[ 474], 95.00th=[ 506], 00:17:56.720 | 99.00th=[ 922], 99.50th=[ 1254], 99.90th=[ 1942], 99.95th=[ 1942], 00:17:56.720 | 99.99th=[ 1942] 00:17:56.720 bw ( KiB/s): min= 4096, max= 4096, per=41.04%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.720 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.720 lat (usec) : 250=25.19%, 500=65.23%, 750=4.14%, 1000=1.13% 00:17:56.720 lat (msec) : 2=0.56%, 50=3.76% 00:17:56.720 cpu : usr=0.59%, sys=0.79%, ctx=533, majf=0, minf=1 00:17:56.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.720 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.720 job1: (groupid=0, jobs=1): err= 0: pid=1788572: Wed Feb 14 20:18:33 2024 00:17:56.720 read: IOPS=19, BW=79.1KiB/s (80.9kB/s)(80.0KiB/1012msec) 00:17:56.720 slat (nsec): min=9548, max=23445, avg=22237.35, stdev=2996.84 00:17:56.720 clat (usec): min=40976, max=43007, avg=42370.62, stdev=600.78 00:17:56.720 lat (usec): min=41000, max=43030, avg=42392.86, stdev=601.22 00:17:56.720 clat percentiles (usec): 00:17:56.720 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:17:56.720 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730], 00:17:56.720 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:17:56.720 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:56.720 | 99.99th=[43254] 00:17:56.720 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:17:56.721 slat (usec): min=4, max=3042, avg=18.30, stdev=134.39 00:17:56.721 clat (usec): min=194, max=877, avg=296.31, stdev=102.89 00:17:56.721 lat (usec): min=205, max=3649, avg=314.61, stdev=180.53 00:17:56.721 clat percentiles (usec): 00:17:56.721 | 1.00th=[ 198], 
5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:17:56.721 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 260], 60.00th=[ 273], 00:17:56.721 | 70.00th=[ 306], 80.00th=[ 379], 90.00th=[ 478], 95.00th=[ 486], 00:17:56.721 | 99.00th=[ 553], 99.50th=[ 725], 99.90th=[ 881], 99.95th=[ 881], 00:17:56.721 | 99.99th=[ 881] 00:17:56.721 bw ( KiB/s): min= 4096, max= 4096, per=41.04%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.721 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.721 lat (usec) : 250=42.67%, 500=51.88%, 750=1.32%, 1000=0.38% 00:17:56.721 lat (msec) : 50=3.76% 00:17:56.721 cpu : usr=0.10%, sys=0.79%, ctx=535, majf=0, minf=2 00:17:56.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.721 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.721 job2: (groupid=0, jobs=1): err= 0: pid=1788592: Wed Feb 14 20:18:33 2024 00:17:56.721 read: IOPS=984, BW=3936KiB/s (4031kB/s)(3940KiB/1001msec) 00:17:56.721 slat (nsec): min=6648, max=51178, avg=14772.53, stdev=8419.43 00:17:56.721 clat (usec): min=352, max=1620, avg=663.95, stdev=210.72 00:17:56.721 lat (usec): min=360, max=1643, avg=678.73, stdev=216.94 00:17:56.721 clat percentiles (usec): 00:17:56.721 | 1.00th=[ 363], 5.00th=[ 375], 10.00th=[ 404], 20.00th=[ 449], 00:17:56.721 | 30.00th=[ 469], 40.00th=[ 506], 50.00th=[ 717], 60.00th=[ 758], 00:17:56.721 | 70.00th=[ 799], 80.00th=[ 840], 90.00th=[ 971], 95.00th=[ 1004], 00:17:56.721 | 99.00th=[ 1045], 99.50th=[ 1188], 99.90th=[ 1614], 99.95th=[ 1614], 00:17:56.721 | 99.99th=[ 1614] 00:17:56.721 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:17:56.721 slat (nsec): min=9666, max=60991, avg=11534.79, stdev=3063.39 00:17:56.721 clat (usec): min=186, max=1073, avg=304.87, stdev=109.75 00:17:56.721 lat (usec): min=196, max=1084, avg=316.40, stdev=110.56 00:17:56.721 clat percentiles (usec): 00:17:56.721 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 219], 00:17:56.721 | 30.00th=[ 233], 40.00th=[ 251], 50.00th=[ 273], 60.00th=[ 297], 00:17:56.721 | 70.00th=[ 334], 80.00th=[ 371], 90.00th=[ 465], 95.00th=[ 502], 00:17:56.721 | 99.00th=[ 652], 99.50th=[ 750], 99.90th=[ 971], 99.95th=[ 1074], 00:17:56.721 | 99.99th=[ 1074] 00:17:56.721 bw ( KiB/s): min= 4096, max= 4096, per=41.04%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.721 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.721 lat (usec) : 250=19.86%, 500=47.49%, 750=12.34%, 1000=17.47% 00:17:56.721 lat (msec) : 2=2.84% 00:17:56.721 cpu : usr=1.30%, sys=2.80%, ctx=2011, majf=0, minf=1 00:17:56.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.721 issued rwts: total=985,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.721 job3: (groupid=0, jobs=1): err= 0: pid=1788599: Wed Feb 14 20:18:33 2024 00:17:56.721 read: IOPS=22, BW=89.7KiB/s (91.8kB/s)(92.0KiB/1026msec) 00:17:56.721 slat (nsec): min=9168, max=24762, avg=21186.30, stdev=4369.57 00:17:56.721 clat (usec): min=396, max=42030, avg=38185.19, 
stdev=11924.44 00:17:56.721 lat (usec): min=408, max=42052, avg=38206.38, stdev=11927.75 00:17:56.721 clat percentiles (usec): 00:17:56.721 | 1.00th=[ 396], 5.00th=[ 429], 10.00th=[40633], 20.00th=[41157], 00:17:56.721 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:56.721 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:56.721 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:56.721 | 99.99th=[42206] 00:17:56.721 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:17:56.721 slat (usec): min=7, max=1368, avg=15.83, stdev=69.14 00:17:56.721 clat (usec): min=191, max=477, avg=265.57, stdev=57.33 00:17:56.721 lat (usec): min=201, max=1609, avg=281.40, stdev=89.72 00:17:56.721 clat percentiles (usec): 00:17:56.721 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 221], 00:17:56.721 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 269], 00:17:56.721 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 392], 00:17:56.721 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 478], 99.95th=[ 478], 00:17:56.721 | 99.99th=[ 478] 00:17:56.721 bw ( KiB/s): min= 4096, max= 4096, per=41.04%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.721 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.721 lat (usec) : 250=45.79%, 500=50.28% 00:17:56.721 lat (msec) : 50=3.93% 00:17:56.721 cpu : usr=0.68%, sys=0.59%, ctx=539, majf=0, minf=1 00:17:56.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.721 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.721 00:17:56.721 Run status group 0 (all jobs): 00:17:56.721 READ: bw=4086KiB/s (4184kB/s), 78.8KiB/s-3936KiB/s (80.7kB/s-4031kB/s), io=4192KiB (4293kB), run=1001-1026msec 00:17:56.721 WRITE: bw=9981KiB/s (10.2MB/s), 1996KiB/s-4092KiB/s (2044kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1026msec 00:17:56.721 00:17:56.721 Disk stats (read/write): 00:17:56.721 nvme0n1: ios=65/512, merge=0/0, ticks=838/164, in_queue=1002, util=85.27% 00:17:56.721 nvme0n2: ios=60/512, merge=0/0, ticks=769/150, in_queue=919, util=90.94% 00:17:56.721 nvme0n3: ios=856/1024, merge=0/0, ticks=1407/308, in_queue=1715, util=95.18% 00:17:56.721 nvme0n4: ios=72/512, merge=0/0, ticks=960/133, in_queue=1093, util=95.98% 00:17:56.721 20:18:33 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:56.721 [global] 00:17:56.721 thread=1 00:17:56.721 invalidate=1 00:17:56.721 rw=randwrite 00:17:56.721 time_based=1 00:17:56.721 runtime=1 00:17:56.721 ioengine=libaio 00:17:56.721 direct=1 00:17:56.721 bs=4096 00:17:56.721 iodepth=1 00:17:56.721 norandommap=0 00:17:56.721 numjobs=1 00:17:56.721 00:17:56.721 verify_dump=1 00:17:56.721 verify_backlog=512 00:17:56.721 verify_state_save=0 00:17:56.721 do_verify=1 00:17:56.721 verify=crc32c-intel 00:17:56.721 [job0] 00:17:56.721 filename=/dev/nvme0n1 00:17:56.721 [job1] 00:17:56.721 filename=/dev/nvme0n2 00:17:56.721 [job2] 00:17:56.721 filename=/dev/nvme0n3 00:17:56.721 [job3] 00:17:56.721 filename=/dev/nvme0n4 00:17:56.721 Could not set queue depth (nvme0n1) 00:17:56.721 Could not set queue depth (nvme0n2) 00:17:56.721 Could not set queue 
depth (nvme0n3) 00:17:56.721 Could not set queue depth (nvme0n4) 00:17:57.016 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.016 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.016 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.016 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.016 fio-3.35 00:17:57.016 Starting 4 threads 00:17:58.391 00:17:58.391 job0: (groupid=0, jobs=1): err= 0: pid=1789030: Wed Feb 14 20:18:35 2024 00:17:58.391 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:58.391 slat (nsec): min=6648, max=48871, avg=11431.37, stdev=6933.68 00:17:58.391 clat (usec): min=375, max=1083, avg=634.82, stdev=224.34 00:17:58.391 lat (usec): min=383, max=1090, avg=646.26, stdev=230.78 00:17:58.391 clat percentiles (usec): 00:17:58.391 | 1.00th=[ 392], 5.00th=[ 416], 10.00th=[ 453], 20.00th=[ 494], 00:17:58.391 | 30.00th=[ 502], 40.00th=[ 506], 50.00th=[ 510], 60.00th=[ 519], 00:17:58.391 | 70.00th=[ 644], 80.00th=[ 988], 90.00th=[ 1012], 95.00th=[ 1020], 00:17:58.391 | 99.00th=[ 1045], 99.50th=[ 1057], 99.90th=[ 1074], 99.95th=[ 1090], 00:17:58.391 | 99.99th=[ 1090] 00:17:58.391 write: IOPS=1302, BW=5211KiB/s (5336kB/s)(5216KiB/1001msec); 0 zone resets 00:17:58.391 slat (nsec): min=9694, max=39175, avg=10864.37, stdev=1405.95 00:17:58.391 clat (usec): min=187, max=740, avg=243.51, stdev=62.65 00:17:58.391 lat (usec): min=198, max=779, avg=254.38, stdev=62.91 00:17:58.391 clat percentiles (usec): 00:17:58.391 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:17:58.391 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:17:58.391 | 70.00th=[ 241], 80.00th=[ 269], 90.00th=[ 314], 95.00th=[ 388], 00:17:58.391 | 99.00th=[ 482], 99.50th=[ 486], 99.90th=[ 494], 99.95th=[ 742], 00:17:58.391 | 99.99th=[ 742] 00:17:58.391 bw ( KiB/s): min= 7696, max= 7696, per=51.24%, avg=7696.00, stdev= 0.00, samples=1 00:17:58.391 iops : min= 1924, max= 1924, avg=1924.00, stdev= 0.00, samples=1 00:17:58.391 lat (usec) : 250=41.19%, 500=27.71%, 750=19.03%, 1000=5.93% 00:17:58.391 lat (msec) : 2=6.14% 00:17:58.391 cpu : usr=1.20%, sys=2.90%, ctx=2331, majf=0, minf=1 00:17:58.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.391 issued rwts: total=1024,1304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.391 job1: (groupid=0, jobs=1): err= 0: pid=1789046: Wed Feb 14 20:18:35 2024 00:17:58.391 read: IOPS=19, BW=78.4KiB/s (80.3kB/s)(80.0KiB/1020msec) 00:17:58.391 slat (nsec): min=10445, max=24489, avg=21461.20, stdev=2682.57 00:17:58.391 clat (usec): min=41072, max=42021, avg=41837.62, stdev=305.66 00:17:58.391 lat (usec): min=41083, max=42043, avg=41859.08, stdev=307.20 00:17:58.391 clat percentiles (usec): 00:17:58.391 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:58.391 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:58.391 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:58.391 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:17:58.391 | 99.99th=[42206] 00:17:58.391 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:17:58.391 slat (nsec): min=4397, max=37074, avg=10671.78, stdev=2719.16 00:17:58.391 clat (usec): min=203, max=1550, avg=342.42, stdev=132.38 00:17:58.391 lat (usec): min=215, max=1556, avg=353.09, stdev=131.61 00:17:58.391 clat percentiles (usec): 00:17:58.391 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 245], 00:17:58.391 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 326], 00:17:58.391 | 70.00th=[ 375], 80.00th=[ 441], 90.00th=[ 490], 95.00th=[ 529], 00:17:58.391 | 99.00th=[ 848], 99.50th=[ 914], 99.90th=[ 1549], 99.95th=[ 1549], 00:17:58.391 | 99.99th=[ 1549] 00:17:58.391 bw ( KiB/s): min= 4096, max= 4096, per=27.27%, avg=4096.00, stdev= 0.00, samples=1 00:17:58.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:58.391 lat (usec) : 250=20.68%, 500=67.67%, 750=5.64%, 1000=2.07% 00:17:58.391 lat (msec) : 2=0.19%, 50=3.76% 00:17:58.391 cpu : usr=0.39%, sys=0.69%, ctx=533, majf=0, minf=2 00:17:58.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.391 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.391 job2: (groupid=0, jobs=1): err= 0: pid=1789067: Wed Feb 14 20:18:35 2024 00:17:58.391 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:17:58.391 slat (nsec): min=7813, max=23855, avg=14277.77, stdev=6699.67 00:17:58.391 clat (usec): min=872, max=43041, avg=38369.50, stdev=12145.88 00:17:58.391 lat (usec): min=881, max=43065, avg=38383.78, stdev=12147.02 00:17:58.391 clat percentiles (usec): 00:17:58.391 | 1.00th=[ 873], 5.00th=[ 922], 10.00th=[41157], 20.00th=[41157], 00:17:58.391 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:58.391 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:17:58.391 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:58.391 | 99.99th=[43254] 00:17:58.391 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:17:58.391 slat (nsec): min=4703, max=40910, avg=10939.75, stdev=2125.78 00:17:58.391 clat (usec): min=208, max=1118, avg=345.62, stdev=114.30 00:17:58.391 lat (usec): min=224, max=1124, avg=356.56, stdev=114.02 00:17:58.392 clat percentiles (usec): 00:17:58.392 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 260], 00:17:58.392 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 326], 60.00th=[ 347], 00:17:58.392 | 70.00th=[ 375], 80.00th=[ 416], 90.00th=[ 482], 95.00th=[ 519], 00:17:58.392 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 1123], 99.95th=[ 1123], 00:17:58.392 | 99.99th=[ 1123] 00:17:58.392 bw ( KiB/s): min= 4096, max= 4096, per=27.27%, avg=4096.00, stdev= 0.00, samples=1 00:17:58.392 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:58.392 lat (usec) : 250=16.10%, 500=73.97%, 750=4.12%, 1000=1.69% 00:17:58.392 lat (msec) : 2=0.37%, 50=3.75% 00:17:58.392 cpu : usr=0.00%, sys=0.78%, ctx=535, majf=0, minf=1 00:17:58.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:17:58.392 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.392 job3: (groupid=0, jobs=1): err= 0: pid=1789074: Wed Feb 14 20:18:35 2024 00:17:58.392 read: IOPS=1023, BW=4096KiB/s (4194kB/s)(4100KiB/1001msec) 00:17:58.392 slat (nsec): min=7654, max=35771, avg=8675.33, stdev=1514.46 00:17:58.392 clat (usec): min=386, max=1011, avg=560.61, stdev=71.48 00:17:58.392 lat (usec): min=395, max=1020, avg=569.29, stdev=71.50 00:17:58.392 clat percentiles (usec): 00:17:58.392 | 1.00th=[ 404], 5.00th=[ 469], 10.00th=[ 510], 20.00th=[ 529], 00:17:58.392 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 570], 00:17:58.392 | 70.00th=[ 578], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[ 660], 00:17:58.392 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 1004], 99.95th=[ 1012], 00:17:58.392 | 99.99th=[ 1012] 00:17:58.392 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:58.392 slat (nsec): min=10694, max=38440, avg=12037.16, stdev=1494.25 00:17:58.392 clat (usec): min=193, max=774, avg=254.28, stdev=56.59 00:17:58.392 lat (usec): min=205, max=786, avg=266.32, stdev=56.86 00:17:58.392 clat percentiles (usec): 00:17:58.392 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:17:58.392 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 249], 00:17:58.392 | 70.00th=[ 265], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 355], 00:17:58.392 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 635], 99.95th=[ 775], 00:17:58.392 | 99.99th=[ 775] 00:17:58.392 bw ( KiB/s): min= 5424, max= 5424, per=36.11%, avg=5424.00, stdev= 0.00, samples=1 00:17:58.392 iops : min= 1356, max= 1356, avg=1356.00, stdev= 0.00, samples=1 00:17:58.392 lat (usec) : 250=36.59%, 500=26.08%, 750=36.16%, 1000=1.09% 00:17:58.392 lat (msec) : 2=0.08% 00:17:58.392 cpu : usr=2.00%, sys=4.50%, ctx=2562, majf=0, minf=1 00:17:58.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.392 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.392 00:17:58.392 Run status group 0 (all jobs): 00:17:58.392 READ: bw=8128KiB/s (8323kB/s), 78.4KiB/s-4096KiB/s (80.3kB/s-4194kB/s), io=8364KiB (8565kB), run=1001-1029msec 00:17:58.392 WRITE: bw=14.7MiB/s (15.4MB/s), 1990KiB/s-6138KiB/s (2038kB/s-6285kB/s), io=15.1MiB (15.8MB), run=1001-1029msec 00:17:58.392 00:17:58.392 Disk stats (read/write): 00:17:58.392 nvme0n1: ios=960/1024, merge=0/0, ticks=1453/238, in_queue=1691, util=89.08% 00:17:58.392 nvme0n2: ios=38/512, merge=0/0, ticks=1559/171, in_queue=1730, util=93.08% 00:17:58.392 nvme0n3: ios=58/512, merge=0/0, ticks=1592/177, in_queue=1769, util=97.06% 00:17:58.392 nvme0n4: ios=1028/1024, merge=0/0, ticks=728/263, in_queue=991, util=97.35% 00:17:58.392 20:18:35 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:58.392 [global] 00:17:58.392 thread=1 00:17:58.392 invalidate=1 00:17:58.392 rw=write 00:17:58.392 time_based=1 00:17:58.392 runtime=1 00:17:58.392 ioengine=libaio 00:17:58.392 direct=1 00:17:58.392 bs=4096 00:17:58.392 iodepth=128 00:17:58.392 norandommap=0 00:17:58.392 numjobs=1 00:17:58.392 00:17:58.392 verify_dump=1 
00:17:58.392 verify_backlog=512 00:17:58.392 verify_state_save=0 00:17:58.392 do_verify=1 00:17:58.392 verify=crc32c-intel 00:17:58.392 [job0] 00:17:58.392 filename=/dev/nvme0n1 00:17:58.392 [job1] 00:17:58.392 filename=/dev/nvme0n2 00:17:58.392 [job2] 00:17:58.392 filename=/dev/nvme0n3 00:17:58.392 [job3] 00:17:58.392 filename=/dev/nvme0n4 00:17:58.392 Could not set queue depth (nvme0n1) 00:17:58.392 Could not set queue depth (nvme0n2) 00:17:58.392 Could not set queue depth (nvme0n3) 00:17:58.392 Could not set queue depth (nvme0n4) 00:17:58.650 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.650 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.650 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.650 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.650 fio-3.35 00:17:58.650 Starting 4 threads 00:18:00.041 00:18:00.041 job0: (groupid=0, jobs=1): err= 0: pid=1789453: Wed Feb 14 20:18:37 2024 00:18:00.041 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:18:00.041 slat (nsec): min=1149, max=22899k, avg=109433.35, stdev=910411.23 00:18:00.041 clat (usec): min=2101, max=54631, avg=15994.94, stdev=8845.83 00:18:00.041 lat (usec): min=2123, max=58151, avg=16104.37, stdev=8909.96 00:18:00.041 clat percentiles (usec): 00:18:00.041 | 1.00th=[ 2507], 5.00th=[ 7832], 10.00th=[ 8979], 20.00th=[10028], 00:18:00.041 | 30.00th=[10814], 40.00th=[12125], 50.00th=[14222], 60.00th=[15270], 00:18:00.041 | 70.00th=[16909], 80.00th=[19268], 90.00th=[28705], 95.00th=[34341], 00:18:00.041 | 99.00th=[49021], 99.50th=[50070], 99.90th=[53740], 99.95th=[53740], 00:18:00.041 | 99.99th=[54789] 00:18:00.041 write: IOPS=3879, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1010msec); 0 zone resets 00:18:00.041 slat (usec): min=2, max=26133, avg=110.10, stdev=766.44 00:18:00.041 clat (usec): min=1267, max=71521, avg=17265.71, stdev=12707.79 00:18:00.041 lat (usec): min=1270, max=71527, avg=17375.81, stdev=12760.43 00:18:00.041 clat percentiles (usec): 00:18:00.041 | 1.00th=[ 2573], 5.00th=[ 5538], 10.00th=[ 7308], 20.00th=[ 8586], 00:18:00.041 | 30.00th=[10421], 40.00th=[11731], 50.00th=[13829], 60.00th=[15401], 00:18:00.041 | 70.00th=[17433], 80.00th=[22152], 90.00th=[31327], 95.00th=[49546], 00:18:00.041 | 99.00th=[63701], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:18:00.041 | 99.99th=[71828] 00:18:00.041 bw ( KiB/s): min=12208, max=18112, per=20.73%, avg=15160.00, stdev=4174.76, samples=2 00:18:00.041 iops : min= 3052, max= 4528, avg=3790.00, stdev=1043.69, samples=2 00:18:00.041 lat (msec) : 2=0.31%, 4=2.01%, 10=20.33%, 20=56.66%, 50=17.80% 00:18:00.041 lat (msec) : 100=2.89% 00:18:00.041 cpu : usr=2.78%, sys=3.57%, ctx=717, majf=0, minf=1 00:18:00.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:00.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:00.041 issued rwts: total=3584,3918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:00.041 job1: (groupid=0, jobs=1): err= 0: pid=1789456: Wed Feb 14 20:18:37 2024 00:18:00.041 read: IOPS=4872, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1005msec) 00:18:00.041 slat (nsec): min=1562, max=8731.2k, 
avg=88493.80, stdev=546886.72 00:18:00.041 clat (usec): min=1463, max=21794, avg=11243.37, stdev=3223.02 00:18:00.041 lat (usec): min=4340, max=22810, avg=11331.86, stdev=3240.11 00:18:00.041 clat percentiles (usec): 00:18:00.041 | 1.00th=[ 5342], 5.00th=[ 7177], 10.00th=[ 8160], 20.00th=[ 8586], 00:18:00.041 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[11338], 00:18:00.041 | 70.00th=[12125], 80.00th=[13304], 90.00th=[16450], 95.00th=[18220], 00:18:00.041 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21627], 99.95th=[21890], 00:18:00.041 | 99.99th=[21890] 00:18:00.041 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:18:00.041 slat (usec): min=2, max=26124, avg=106.65, stdev=553.35 00:18:00.041 clat (usec): min=1636, max=22811, avg=13477.10, stdev=4053.59 00:18:00.041 lat (usec): min=1649, max=27760, avg=13583.75, stdev=4065.95 00:18:00.041 clat percentiles (usec): 00:18:00.041 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 7570], 20.00th=[ 9503], 00:18:00.041 | 30.00th=[11207], 40.00th=[13042], 50.00th=[14222], 60.00th=[15533], 00:18:00.041 | 70.00th=[16450], 80.00th=[17171], 90.00th=[17957], 95.00th=[18744], 00:18:00.041 | 99.00th=[21103], 99.50th=[21365], 99.90th=[22676], 99.95th=[22938], 00:18:00.041 | 99.99th=[22938] 00:18:00.041 bw ( KiB/s): min=20480, max=20480, per=28.00%, avg=20480.00, stdev= 0.00, samples=2 00:18:00.041 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:18:00.041 lat (msec) : 2=0.03%, 4=0.14%, 10=33.03%, 20=65.04%, 50=1.76% 00:18:00.041 cpu : usr=3.59%, sys=3.98%, ctx=683, majf=0, minf=1 00:18:00.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:00.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:00.041 issued rwts: total=4897,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:00.041 job2: (groupid=0, jobs=1): err= 0: pid=1789457: Wed Feb 14 20:18:37 2024 00:18:00.041 read: IOPS=4517, BW=17.6MiB/s (18.5MB/s)(18.0MiB/1020msec) 00:18:00.041 slat (nsec): min=1731, max=10945k, avg=95855.97, stdev=631723.57 00:18:00.041 clat (usec): min=6529, max=26092, avg=12695.02, stdev=3676.38 00:18:00.041 lat (usec): min=6536, max=26099, avg=12790.88, stdev=3700.17 00:18:00.041 clat percentiles (usec): 00:18:00.041 | 1.00th=[ 7308], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 9110], 00:18:00.041 | 30.00th=[10421], 40.00th=[11076], 50.00th=[12256], 60.00th=[13173], 00:18:00.041 | 70.00th=[14353], 80.00th=[15664], 90.00th=[18220], 95.00th=[20579], 00:18:00.041 | 99.00th=[22152], 99.50th=[22676], 99.90th=[26084], 99.95th=[26084], 00:18:00.041 | 99.99th=[26084] 00:18:00.041 write: IOPS=4907, BW=19.2MiB/s (20.1MB/s)(19.6MiB/1020msec); 0 zone resets 00:18:00.041 slat (usec): min=2, max=9806, avg=107.40, stdev=479.07 00:18:00.041 clat (usec): min=1710, max=37564, avg=14148.17, stdev=4520.99 00:18:00.041 lat (usec): min=1722, max=37569, avg=14255.57, stdev=4526.69 00:18:00.041 clat percentiles (usec): 00:18:00.041 | 1.00th=[ 5407], 5.00th=[ 7439], 10.00th=[ 8094], 20.00th=[ 9896], 00:18:00.041 | 30.00th=[11600], 40.00th=[13304], 50.00th=[14746], 60.00th=[15664], 00:18:00.041 | 70.00th=[16450], 80.00th=[17433], 90.00th=[18744], 95.00th=[19792], 00:18:00.041 | 99.00th=[30540], 99.50th=[33817], 99.90th=[37487], 99.95th=[37487], 00:18:00.041 | 99.99th=[37487] 00:18:00.041 bw ( KiB/s): min=19096, max=19967, per=26.70%, 
avg=19531.50, stdev=615.89, samples=2 00:18:00.041 iops : min= 4774, max= 4991, avg=4882.50, stdev=153.44, samples=2 00:18:00.041 lat (msec) : 2=0.02%, 4=0.02%, 10=23.40%, 20=71.95%, 50=4.61% 00:18:00.041 cpu : usr=2.94%, sys=4.12%, ctx=786, majf=0, minf=1 00:18:00.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:00.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:00.041 issued rwts: total=4608,5006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:00.041 job3: (groupid=0, jobs=1): err= 0: pid=1789458: Wed Feb 14 20:18:37 2024 00:18:00.041 read: IOPS=4313, BW=16.8MiB/s (17.7MB/s)(16.9MiB/1005msec) 00:18:00.041 slat (nsec): min=1612, max=12334k, avg=101675.19, stdev=699804.47 00:18:00.041 clat (usec): min=3205, max=24972, avg=13784.73, stdev=3543.51 00:18:00.041 lat (usec): min=6979, max=24983, avg=13886.41, stdev=3563.19 00:18:00.041 clat percentiles (usec): 00:18:00.041 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10421], 00:18:00.041 | 30.00th=[11207], 40.00th=[12649], 50.00th=[13698], 60.00th=[14746], 00:18:00.041 | 70.00th=[15533], 80.00th=[16712], 90.00th=[18744], 95.00th=[19792], 00:18:00.041 | 99.00th=[22676], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:18:00.041 | 99.99th=[25035] 00:18:00.041 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:18:00.041 slat (usec): min=2, max=25964, avg=115.75, stdev=788.41 00:18:00.041 clat (usec): min=2260, max=29062, avg=13983.79, stdev=4653.93 00:18:00.041 lat (usec): min=3672, max=29625, avg=14099.54, stdev=4664.51 00:18:00.041 clat percentiles (usec): 00:18:00.041 | 1.00th=[ 5080], 5.00th=[ 7177], 10.00th=[ 8979], 20.00th=[ 9765], 00:18:00.041 | 30.00th=[10945], 40.00th=[12125], 50.00th=[13829], 60.00th=[14877], 00:18:00.041 | 70.00th=[16057], 80.00th=[17695], 90.00th=[20055], 95.00th=[23200], 00:18:00.041 | 99.00th=[26870], 99.50th=[27395], 99.90th=[28967], 99.95th=[28967], 00:18:00.041 | 99.99th=[28967] 00:18:00.041 bw ( KiB/s): min=17544, max=19320, per=25.20%, avg=18432.00, stdev=1255.82, samples=2 00:18:00.041 iops : min= 4386, max= 4830, avg=4608.00, stdev=313.96, samples=2 00:18:00.041 lat (msec) : 4=0.11%, 10=18.57%, 20=74.05%, 50=7.27% 00:18:00.041 cpu : usr=3.39%, sys=5.68%, ctx=414, majf=0, minf=1 00:18:00.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:00.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:00.041 issued rwts: total=4335,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:00.041 00:18:00.041 Run status group 0 (all jobs): 00:18:00.041 READ: bw=66.7MiB/s (70.0MB/s), 13.9MiB/s-19.0MiB/s (14.5MB/s-20.0MB/s), io=68.1MiB (71.4MB), run=1005-1020msec 00:18:00.042 WRITE: bw=71.4MiB/s (74.9MB/s), 15.2MiB/s-19.9MiB/s (15.9MB/s-20.9MB/s), io=72.9MiB (76.4MB), run=1005-1020msec 00:18:00.042 00:18:00.042 Disk stats (read/write): 00:18:00.042 nvme0n1: ios=3093/3582, merge=0/0, ticks=43760/52224, in_queue=95984, util=91.08% 00:18:00.042 nvme0n2: ios=4116/4264, merge=0/0, ticks=46551/54556, in_queue=101107, util=95.33% 00:18:00.042 nvme0n3: ios=3920/4096, merge=0/0, ticks=50471/54776, in_queue=105247, util=96.44% 00:18:00.042 nvme0n4: ios=3604/3894, 
merge=0/0, ticks=48473/53504, in_queue=101977, util=99.68% 00:18:00.042 20:18:37 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:00.042 [global] 00:18:00.042 thread=1 00:18:00.042 invalidate=1 00:18:00.042 rw=randwrite 00:18:00.042 time_based=1 00:18:00.042 runtime=1 00:18:00.042 ioengine=libaio 00:18:00.042 direct=1 00:18:00.042 bs=4096 00:18:00.042 iodepth=128 00:18:00.042 norandommap=0 00:18:00.042 numjobs=1 00:18:00.042 00:18:00.042 verify_dump=1 00:18:00.042 verify_backlog=512 00:18:00.042 verify_state_save=0 00:18:00.042 do_verify=1 00:18:00.042 verify=crc32c-intel 00:18:00.042 [job0] 00:18:00.042 filename=/dev/nvme0n1 00:18:00.042 [job1] 00:18:00.042 filename=/dev/nvme0n2 00:18:00.042 [job2] 00:18:00.042 filename=/dev/nvme0n3 00:18:00.042 [job3] 00:18:00.042 filename=/dev/nvme0n4 00:18:00.042 Could not set queue depth (nvme0n1) 00:18:00.042 Could not set queue depth (nvme0n2) 00:18:00.042 Could not set queue depth (nvme0n3) 00:18:00.042 Could not set queue depth (nvme0n4) 00:18:00.299 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:00.299 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:00.299 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:00.299 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:00.299 fio-3.35 00:18:00.299 Starting 4 threads 00:18:01.675 00:18:01.675 job0: (groupid=0, jobs=1): err= 0: pid=1789832: Wed Feb 14 20:18:38 2024 00:18:01.675 read: IOPS=3693, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec) 00:18:01.675 slat (nsec): min=1071, max=22766k, avg=124870.30, stdev=925777.06 00:18:01.675 clat (usec): min=849, max=54675, avg=16272.25, stdev=7492.10 00:18:01.675 lat (usec): min=866, max=54688, avg=16397.12, stdev=7534.86 00:18:01.675 clat percentiles (usec): 00:18:01.675 | 1.00th=[ 1795], 5.00th=[ 7898], 10.00th=[ 9372], 20.00th=[10683], 00:18:01.675 | 30.00th=[11731], 40.00th=[13829], 50.00th=[14484], 60.00th=[15795], 00:18:01.675 | 70.00th=[18220], 80.00th=[20579], 90.00th=[25560], 95.00th=[31065], 00:18:01.675 | 99.00th=[44303], 99.50th=[51643], 99.90th=[54789], 99.95th=[54789], 00:18:01.675 | 99.99th=[54789] 00:18:01.675 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:18:01.675 slat (nsec): min=1894, max=15120k, avg=115465.56, stdev=671570.11 00:18:01.675 clat (usec): min=1290, max=56085, avg=16273.29, stdev=7697.97 00:18:01.675 lat (usec): min=1317, max=56089, avg=16388.76, stdev=7731.97 00:18:01.675 clat percentiles (usec): 00:18:01.675 | 1.00th=[ 5669], 5.00th=[ 7373], 10.00th=[ 8717], 20.00th=[10290], 00:18:01.675 | 30.00th=[11600], 40.00th=[13042], 50.00th=[14222], 60.00th=[15270], 00:18:01.675 | 70.00th=[17433], 80.00th=[21890], 90.00th=[27919], 95.00th=[33162], 00:18:01.675 | 99.00th=[38011], 99.50th=[38536], 99.90th=[55313], 99.95th=[55837], 00:18:01.675 | 99.99th=[55837] 00:18:01.675 bw ( KiB/s): min=15360, max=17408, per=23.74%, avg=16384.00, stdev=1448.15, samples=2 00:18:01.675 iops : min= 3840, max= 4352, avg=4096.00, stdev=362.04, samples=2 00:18:01.675 lat (usec) : 1000=0.01% 00:18:01.675 lat (msec) : 2=0.61%, 4=0.36%, 10=15.43%, 20=60.75%, 50=22.48% 00:18:01.675 lat (msec) : 100=0.36% 00:18:01.675 cpu : usr=2.29%, sys=4.58%, ctx=606, majf=0, minf=1 00:18:01.675 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:01.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.675 issued rwts: total=3712,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.675 job1: (groupid=0, jobs=1): err= 0: pid=1789833: Wed Feb 14 20:18:38 2024 00:18:01.675 read: IOPS=3930, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1005msec) 00:18:01.675 slat (nsec): min=1161, max=17298k, avg=111036.82, stdev=738230.59 00:18:01.676 clat (usec): min=1407, max=33942, avg=15310.00, stdev=5820.37 00:18:01.676 lat (usec): min=1588, max=33945, avg=15421.04, stdev=5846.42 00:18:01.676 clat percentiles (usec): 00:18:01.676 | 1.00th=[ 4686], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10421], 00:18:01.676 | 30.00th=[11338], 40.00th=[12125], 50.00th=[13566], 60.00th=[15795], 00:18:01.676 | 70.00th=[18744], 80.00th=[20841], 90.00th=[23200], 95.00th=[25560], 00:18:01.676 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:18:01.676 | 99.99th=[33817] 00:18:01.676 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:18:01.676 slat (nsec): min=1919, max=13924k, avg=127835.61, stdev=708415.73 00:18:01.676 clat (usec): min=2342, max=35237, avg=16369.01, stdev=6229.45 00:18:01.676 lat (usec): min=2353, max=35241, avg=16496.85, stdev=6258.08 00:18:01.676 clat percentiles (usec): 00:18:01.676 | 1.00th=[ 5407], 5.00th=[ 6849], 10.00th=[ 8979], 20.00th=[11338], 00:18:01.676 | 30.00th=[12125], 40.00th=[13435], 50.00th=[15795], 60.00th=[17695], 00:18:01.676 | 70.00th=[20841], 80.00th=[21890], 90.00th=[24773], 95.00th=[27395], 00:18:01.676 | 99.00th=[32113], 99.50th=[32900], 99.90th=[33817], 99.95th=[35390], 00:18:01.676 | 99.99th=[35390] 00:18:01.676 bw ( KiB/s): min=14768, max=18000, per=23.74%, avg=16384.00, stdev=2285.37, samples=2 00:18:01.676 iops : min= 3692, max= 4500, avg=4096.00, stdev=571.34, samples=2 00:18:01.676 lat (msec) : 2=0.11%, 4=0.36%, 10=15.14%, 20=55.39%, 50=29.00% 00:18:01.676 cpu : usr=2.09%, sys=3.49%, ctx=659, majf=0, minf=1 00:18:01.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:01.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.676 issued rwts: total=3950,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.676 job2: (groupid=0, jobs=1): err= 0: pid=1789834: Wed Feb 14 20:18:38 2024 00:18:01.676 read: IOPS=4185, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1009msec) 00:18:01.676 slat (nsec): min=1134, max=11302k, avg=99733.09, stdev=618070.02 00:18:01.676 clat (usec): min=1228, max=24584, avg=12757.15, stdev=3556.93 00:18:01.676 lat (usec): min=6791, max=24592, avg=12856.88, stdev=3584.08 00:18:01.676 clat percentiles (usec): 00:18:01.676 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[10028], 00:18:01.676 | 30.00th=[10552], 40.00th=[11600], 50.00th=[11863], 60.00th=[12518], 00:18:01.676 | 70.00th=[13829], 80.00th=[15795], 90.00th=[17695], 95.00th=[20317], 00:18:01.676 | 99.00th=[23987], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:18:01.676 | 99.99th=[24511] 00:18:01.676 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:18:01.676 slat (usec): min=2, max=10902, avg=121.24, 
stdev=516.92 00:18:01.676 clat (usec): min=1096, max=30732, avg=16082.51, stdev=4590.52 00:18:01.676 lat (usec): min=1101, max=30759, avg=16203.76, stdev=4603.60 00:18:01.676 clat percentiles (usec): 00:18:01.676 | 1.00th=[ 5211], 5.00th=[ 8160], 10.00th=[10028], 20.00th=[11994], 00:18:01.676 | 30.00th=[13829], 40.00th=[15270], 50.00th=[16581], 60.00th=[17433], 00:18:01.676 | 70.00th=[18220], 80.00th=[19792], 90.00th=[21365], 95.00th=[23725], 00:18:01.676 | 99.00th=[27395], 99.50th=[29754], 99.90th=[30016], 99.95th=[30802], 00:18:01.676 | 99.99th=[30802] 00:18:01.676 bw ( KiB/s): min=17504, max=19352, per=26.70%, avg=18428.00, stdev=1306.73, samples=2 00:18:01.676 iops : min= 4376, max= 4838, avg=4607.00, stdev=326.68, samples=2 00:18:01.676 lat (msec) : 2=0.11%, 4=0.17%, 10=13.97%, 20=74.20%, 50=11.54% 00:18:01.676 cpu : usr=2.28%, sys=3.67%, ctx=752, majf=0, minf=1 00:18:01.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:01.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.676 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.676 job3: (groupid=0, jobs=1): err= 0: pid=1789835: Wed Feb 14 20:18:38 2024 00:18:01.676 read: IOPS=4547, BW=17.8MiB/s (18.6MB/s)(17.9MiB/1008msec) 00:18:01.676 slat (nsec): min=1522, max=92266k, avg=110148.24, stdev=1510450.34 00:18:01.676 clat (msec): min=3, max=105, avg=14.58, stdev=14.90 00:18:01.676 lat (msec): min=5, max=105, avg=14.69, stdev=14.96 00:18:01.676 clat percentiles (msec): 00:18:01.676 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:18:01.676 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:18:01.676 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 18], 95.00th=[ 22], 00:18:01.676 | 99.00th=[ 102], 99.50th=[ 102], 99.90th=[ 106], 99.95th=[ 106], 00:18:01.676 | 99.99th=[ 106] 00:18:01.676 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:18:01.676 slat (usec): min=2, max=9654, avg=101.77, stdev=493.43 00:18:01.676 clat (usec): min=1251, max=101005, avg=13253.77, stdev=4337.18 00:18:01.676 lat (usec): min=1266, max=101012, avg=13355.54, stdev=4339.49 00:18:01.676 clat percentiles (msec): 00:18:01.676 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 10], 00:18:01.676 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 15], 00:18:01.676 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 19], 95.00th=[ 21], 00:18:01.676 | 99.00th=[ 24], 99.50th=[ 25], 99.90th=[ 31], 99.95th=[ 31], 00:18:01.676 | 99.99th=[ 102] 00:18:01.676 bw ( KiB/s): min=16128, max=20736, per=26.71%, avg=18432.00, stdev=3258.35, samples=2 00:18:01.676 iops : min= 4032, max= 5184, avg=4608.00, stdev=814.59, samples=2 00:18:01.676 lat (msec) : 2=0.03%, 4=0.54%, 10=24.86%, 20=69.16%, 50=4.03% 00:18:01.676 lat (msec) : 100=0.45%, 250=0.94% 00:18:01.676 cpu : usr=2.78%, sys=4.07%, ctx=677, majf=0, minf=1 00:18:01.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:01.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.676 issued rwts: total=4584,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.676 00:18:01.676 Run status group 0 (all jobs): 00:18:01.676 READ: bw=63.8MiB/s 
(66.9MB/s), 14.4MiB/s-17.8MiB/s (15.1MB/s-18.6MB/s), io=64.3MiB (67.5MB), run=1005-1009msec 00:18:01.676 WRITE: bw=67.4MiB/s (70.7MB/s), 15.9MiB/s-17.9MiB/s (16.7MB/s-18.7MB/s), io=68.0MiB (71.3MB), run=1005-1009msec 00:18:01.676 00:18:01.676 Disk stats (read/write): 00:18:01.676 nvme0n1: ios=3282/3584, merge=0/0, ticks=44365/43763, in_queue=88128, util=86.17% 00:18:01.676 nvme0n2: ios=3256/3584, merge=0/0, ticks=28635/38354, in_queue=66989, util=86.76% 00:18:01.676 nvme0n3: ios=3584/3889, merge=0/0, ticks=41212/55848, in_queue=97060, util=88.76% 00:18:01.676 nvme0n4: ios=4096/4320, merge=0/0, ticks=48716/54982, in_queue=103698, util=89.61% 00:18:01.676 20:18:38 -- target/fio.sh@55 -- # sync 00:18:01.676 20:18:38 -- target/fio.sh@59 -- # fio_pid=1790067 00:18:01.676 20:18:38 -- target/fio.sh@61 -- # sleep 3 00:18:01.676 20:18:38 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:01.677 [global] 00:18:01.677 thread=1 00:18:01.677 invalidate=1 00:18:01.677 rw=read 00:18:01.677 time_based=1 00:18:01.677 runtime=10 00:18:01.677 ioengine=libaio 00:18:01.677 direct=1 00:18:01.677 bs=4096 00:18:01.677 iodepth=1 00:18:01.677 norandommap=1 00:18:01.677 numjobs=1 00:18:01.677 00:18:01.677 [job0] 00:18:01.677 filename=/dev/nvme0n1 00:18:01.677 [job1] 00:18:01.677 filename=/dev/nvme0n2 00:18:01.677 [job2] 00:18:01.677 filename=/dev/nvme0n3 00:18:01.677 [job3] 00:18:01.677 filename=/dev/nvme0n4 00:18:01.677 Could not set queue depth (nvme0n1) 00:18:01.677 Could not set queue depth (nvme0n2) 00:18:01.677 Could not set queue depth (nvme0n3) 00:18:01.677 Could not set queue depth (nvme0n4) 00:18:01.677 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:01.677 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:01.677 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:01.677 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:01.677 fio-3.35 00:18:01.677 Starting 4 threads 00:18:04.962 20:18:41 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:04.962 20:18:41 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:04.962 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=16392192, buflen=4096 00:18:04.962 fio: pid=1790207, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:04.962 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=21843968, buflen=4096 00:18:04.962 fio: pid=1790206, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:04.962 20:18:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:04.962 20:18:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:04.962 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=5791744, buflen=4096 00:18:04.962 fio: pid=1790204, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:04.962 20:18:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:04.962 20:18:42 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:05.221 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=6893568, buflen=4096 00:18:05.221 fio: pid=1790205, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:05.221 20:18:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.221 20:18:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:05.221 00:18:05.221 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1790204: Wed Feb 14 20:18:42 2024 00:18:05.221 read: IOPS=465, BW=1861KiB/s (1905kB/s)(5656KiB/3040msec) 00:18:05.221 slat (usec): min=6, max=7615, avg=14.14, stdev=202.25 00:18:05.221 clat (usec): min=325, max=42028, avg=2118.75, stdev=7910.44 00:18:05.221 lat (usec): min=333, max=49007, avg=2132.88, stdev=7942.27 00:18:05.221 clat percentiles (usec): 00:18:05.221 | 1.00th=[ 351], 5.00th=[ 416], 10.00th=[ 482], 20.00th=[ 502], 00:18:05.221 | 30.00th=[ 510], 40.00th=[ 515], 50.00th=[ 519], 60.00th=[ 523], 00:18:05.221 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 791], 95.00th=[ 930], 00:18:05.221 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:05.221 | 99.99th=[42206] 00:18:05.221 bw ( KiB/s): min= 96, max= 7312, per=14.48%, avg=2241.60, stdev=3216.63, samples=5 00:18:05.221 iops : min= 24, max= 1828, avg=560.40, stdev=804.16, samples=5 00:18:05.221 lat (usec) : 500=17.88%, 750=70.60%, 1000=6.93% 00:18:05.221 lat (msec) : 2=0.64%, 4=0.07%, 50=3.82% 00:18:05.221 cpu : usr=0.16%, sys=0.86%, ctx=1417, majf=0, minf=1 00:18:05.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:05.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.221 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.221 issued rwts: total=1415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:05.221 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1790205: Wed Feb 14 20:18:42 2024 00:18:05.221 read: IOPS=523, BW=2095KiB/s (2145kB/s)(6732KiB/3214msec) 00:18:05.221 slat (usec): min=6, max=16506, avg=19.41, stdev=406.65 00:18:05.221 clat (usec): min=323, max=42980, avg=1876.10, stdev=7466.40 00:18:05.221 lat (usec): min=331, max=58920, avg=1894.02, stdev=7532.60 00:18:05.221 clat percentiles (usec): 00:18:05.221 | 1.00th=[ 363], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 441], 00:18:05.221 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 457], 60.00th=[ 465], 00:18:05.221 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 627], 00:18:05.221 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:18:05.221 | 99.99th=[42730] 00:18:05.221 bw ( KiB/s): min= 87, max= 6584, per=14.46%, avg=2237.17, stdev=3045.32, samples=6 00:18:05.221 iops : min= 21, max= 1646, avg=559.17, stdev=761.44, samples=6 00:18:05.221 lat (usec) : 500=69.42%, 750=25.83%, 1000=1.19% 00:18:05.221 lat (msec) : 2=0.12%, 50=3.38% 00:18:05.221 cpu : usr=0.19%, sys=0.50%, ctx=1686, majf=0, minf=1 00:18:05.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:05.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.221 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:18:05.221 issued rwts: total=1684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:05.222 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1790206: Wed Feb 14 20:18:42 2024 00:18:05.222 read: IOPS=1861, BW=7443KiB/s (7622kB/s)(20.8MiB/2866msec) 00:18:05.222 slat (nsec): min=6928, max=38058, avg=8755.53, stdev=1683.63 00:18:05.222 clat (usec): min=343, max=16717, avg=521.95, stdev=276.55 00:18:05.222 lat (usec): min=350, max=16743, avg=530.71, stdev=276.91 00:18:05.222 clat percentiles (usec): 00:18:05.222 | 1.00th=[ 396], 5.00th=[ 437], 10.00th=[ 449], 20.00th=[ 461], 00:18:05.222 | 30.00th=[ 474], 40.00th=[ 486], 50.00th=[ 502], 60.00th=[ 523], 00:18:05.222 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 627], 00:18:05.222 | 99.00th=[ 758], 99.50th=[ 840], 99.90th=[ 1582], 99.95th=[ 5669], 00:18:05.222 | 99.99th=[16712] 00:18:05.222 bw ( KiB/s): min= 6928, max= 8168, per=48.75%, avg=7542.40, stdev=455.77, samples=5 00:18:05.222 iops : min= 1732, max= 2042, avg=1885.60, stdev=113.94, samples=5 00:18:05.222 lat (usec) : 500=47.99%, 750=50.96%, 1000=0.71% 00:18:05.222 lat (msec) : 2=0.24%, 4=0.02%, 10=0.04%, 20=0.02% 00:18:05.222 cpu : usr=1.01%, sys=3.28%, ctx=5336, majf=0, minf=1 00:18:05.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:05.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.222 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.222 issued rwts: total=5334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:05.222 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1790207: Wed Feb 14 20:18:42 2024 00:18:05.222 read: IOPS=1484, BW=5938KiB/s (6080kB/s)(15.6MiB/2696msec) 00:18:05.222 slat (nsec): min=7022, max=42439, avg=9635.44, stdev=2007.18 00:18:05.222 clat (usec): min=406, max=3721, avg=652.88, stdev=150.41 00:18:05.222 lat (usec): min=414, max=3732, avg=662.51, stdev=150.74 00:18:05.222 clat percentiles (usec): 00:18:05.222 | 1.00th=[ 510], 5.00th=[ 529], 10.00th=[ 545], 20.00th=[ 553], 00:18:05.222 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:18:05.222 | 70.00th=[ 660], 80.00th=[ 775], 90.00th=[ 865], 95.00th=[ 906], 00:18:05.222 | 99.00th=[ 1074], 99.50th=[ 1123], 99.90th=[ 1909], 99.95th=[ 2073], 00:18:05.222 | 99.99th=[ 3720] 00:18:05.222 bw ( KiB/s): min= 5120, max= 6824, per=38.57%, avg=5968.00, stdev=729.03, samples=5 00:18:05.222 iops : min= 1280, max= 1706, avg=1492.00, stdev=182.26, samples=5 00:18:05.222 lat (usec) : 500=0.45%, 750=77.29%, 1000=20.43% 00:18:05.222 lat (msec) : 2=1.72%, 4=0.07% 00:18:05.222 cpu : usr=1.04%, sys=2.56%, ctx=4004, majf=0, minf=2 00:18:05.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:05.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.222 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.222 issued rwts: total=4003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:05.222 00:18:05.222 Run status group 0 (all jobs): 00:18:05.222 READ: bw=15.1MiB/s (15.8MB/s), 1861KiB/s-7443KiB/s (1905kB/s-7622kB/s), io=48.6MiB (50.9MB), run=2696-3214msec 00:18:05.222 00:18:05.222 Disk stats (read/write): 00:18:05.222 nvme0n1: 
ios=1448/0, merge=0/0, ticks=3873/0, in_queue=3873, util=99.67% 00:18:05.222 nvme0n2: ios=1680/0, merge=0/0, ticks=3029/0, in_queue=3029, util=95.51% 00:18:05.222 nvme0n3: ios=5381/0, merge=0/0, ticks=3786/0, in_queue=3786, util=99.63% 00:18:05.222 nvme0n4: ios=3941/0, merge=0/0, ticks=3701/0, in_queue=3701, util=99.48% 00:18:05.481 20:18:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.481 20:18:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:05.481 20:18:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.481 20:18:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:05.739 20:18:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.739 20:18:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:05.998 20:18:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.998 20:18:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:05.998 20:18:43 -- target/fio.sh@69 -- # fio_status=0 00:18:05.998 20:18:43 -- target/fio.sh@70 -- # wait 1790067 00:18:05.998 20:18:43 -- target/fio.sh@70 -- # fio_status=4 00:18:05.998 20:18:43 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.257 20:18:43 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.257 20:18:43 -- common/autotest_common.sh@1196 -- # local i=0 00:18:06.257 20:18:43 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:06.257 20:18:43 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.257 20:18:43 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:06.257 20:18:43 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.257 20:18:43 -- common/autotest_common.sh@1208 -- # return 0 00:18:06.257 20:18:43 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:06.257 20:18:43 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:06.257 nvmf hotplug test: fio failed as expected 00:18:06.257 20:18:43 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.516 20:18:43 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:06.517 20:18:43 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:06.517 20:18:43 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:06.517 20:18:43 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:06.517 20:18:43 -- target/fio.sh@91 -- # nvmftestfini 00:18:06.517 20:18:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:06.517 20:18:43 -- nvmf/common.sh@116 -- # sync 00:18:06.517 20:18:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:06.517 20:18:43 -- nvmf/common.sh@119 -- # set +e 00:18:06.517 20:18:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:06.517 20:18:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:06.517 rmmod nvme_tcp 00:18:06.517 rmmod nvme_fabrics 00:18:06.517 rmmod nvme_keyring 00:18:06.517 20:18:43 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:06.517 20:18:43 -- nvmf/common.sh@123 -- # set -e 00:18:06.517 20:18:43 -- nvmf/common.sh@124 -- # return 0 00:18:06.517 20:18:43 -- nvmf/common.sh@477 -- # '[' -n 1787137 ']' 00:18:06.517 20:18:43 -- nvmf/common.sh@478 -- # killprocess 1787137 00:18:06.517 20:18:43 -- common/autotest_common.sh@924 -- # '[' -z 1787137 ']' 00:18:06.517 20:18:43 -- common/autotest_common.sh@928 -- # kill -0 1787137 00:18:06.517 20:18:43 -- common/autotest_common.sh@929 -- # uname 00:18:06.517 20:18:43 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:06.517 20:18:43 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1787137 00:18:06.517 20:18:43 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:06.517 20:18:43 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:06.517 20:18:43 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1787137' 00:18:06.517 killing process with pid 1787137 00:18:06.517 20:18:43 -- common/autotest_common.sh@943 -- # kill 1787137 00:18:06.517 20:18:43 -- common/autotest_common.sh@948 -- # wait 1787137 00:18:06.776 20:18:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:06.776 20:18:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:06.776 20:18:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:06.776 20:18:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.776 20:18:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:06.776 20:18:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.776 20:18:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.776 20:18:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.680 20:18:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:08.680 00:18:08.680 real 0m27.123s 00:18:08.680 user 1m46.533s 00:18:08.680 sys 0m8.277s 00:18:08.680 20:18:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:08.680 20:18:46 -- common/autotest_common.sh@10 -- # set +x 00:18:08.680 ************************************ 00:18:08.680 END TEST nvmf_fio_target 00:18:08.680 ************************************ 00:18:08.939 20:18:46 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:08.939 20:18:46 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:18:08.939 20:18:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:08.939 20:18:46 -- common/autotest_common.sh@10 -- # set +x 00:18:08.939 ************************************ 00:18:08.939 START TEST nvmf_bdevio 00:18:08.939 ************************************ 00:18:08.939 20:18:46 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:08.939 * Looking for test storage... 
00:18:08.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.939 20:18:46 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.939 20:18:46 -- nvmf/common.sh@7 -- # uname -s 00:18:08.939 20:18:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.939 20:18:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.939 20:18:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.939 20:18:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.939 20:18:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.939 20:18:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.939 20:18:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.939 20:18:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.939 20:18:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.939 20:18:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.939 20:18:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:08.939 20:18:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:08.939 20:18:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.939 20:18:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.939 20:18:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.939 20:18:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.939 20:18:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.939 20:18:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.939 20:18:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.939 20:18:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.939 20:18:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.939 20:18:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.939 20:18:46 -- paths/export.sh@5 -- # export PATH 00:18:08.939 20:18:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.939 20:18:46 -- nvmf/common.sh@46 -- # : 0 00:18:08.939 20:18:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:08.939 20:18:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:08.939 20:18:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:08.939 20:18:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.939 20:18:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.939 20:18:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:08.939 20:18:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:08.939 20:18:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:08.939 20:18:46 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:08.939 20:18:46 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.939 20:18:46 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:08.939 20:18:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:08.939 20:18:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.939 20:18:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:08.939 20:18:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:08.939 20:18:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:08.939 20:18:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.939 20:18:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.939 20:18:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.939 20:18:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:08.939 20:18:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:08.939 20:18:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:08.939 20:18:46 -- common/autotest_common.sh@10 -- # set +x 00:18:15.506 20:18:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:15.506 20:18:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:15.506 20:18:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:15.506 20:18:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:15.506 20:18:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:15.506 20:18:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:15.506 20:18:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:15.506 20:18:51 -- nvmf/common.sh@294 -- # net_devs=() 00:18:15.506 20:18:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:15.506 20:18:51 -- nvmf/common.sh@295 
-- # e810=() 00:18:15.506 20:18:51 -- nvmf/common.sh@295 -- # local -ga e810 00:18:15.506 20:18:51 -- nvmf/common.sh@296 -- # x722=() 00:18:15.506 20:18:51 -- nvmf/common.sh@296 -- # local -ga x722 00:18:15.506 20:18:51 -- nvmf/common.sh@297 -- # mlx=() 00:18:15.506 20:18:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:15.506 20:18:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.506 20:18:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:15.506 20:18:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:15.506 20:18:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:15.507 20:18:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:15.507 20:18:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:15.507 20:18:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:15.507 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:15.507 20:18:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:15.507 20:18:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:15.507 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:15.507 20:18:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:15.507 20:18:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:15.507 20:18:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.507 20:18:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:15.507 20:18:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.507 20:18:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:15.507 Found 
net devices under 0000:af:00.0: cvl_0_0 00:18:15.507 20:18:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.507 20:18:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:15.507 20:18:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.507 20:18:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:15.507 20:18:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.507 20:18:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:15.507 Found net devices under 0000:af:00.1: cvl_0_1 00:18:15.507 20:18:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.507 20:18:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:15.507 20:18:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:15.507 20:18:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:15.507 20:18:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:15.507 20:18:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.507 20:18:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.507 20:18:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.507 20:18:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:15.507 20:18:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.507 20:18:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.507 20:18:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:15.507 20:18:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.507 20:18:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.507 20:18:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:15.507 20:18:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:15.507 20:18:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.507 20:18:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.507 20:18:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.507 20:18:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.507 20:18:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:15.507 20:18:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.507 20:18:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.507 20:18:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.507 20:18:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:15.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:18:15.507 00:18:15.507 --- 10.0.0.2 ping statistics --- 00:18:15.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.507 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:18:15.507 20:18:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms
00:18:15.507
00:18:15.507 --- 10.0.0.1 ping statistics ---
00:18:15.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:15.507 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms
00:18:15.507 20:18:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:15.507 20:18:52 -- nvmf/common.sh@410 -- # return 0
00:18:15.507 20:18:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:18:15.507 20:18:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:15.507 20:18:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:18:15.507 20:18:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:18:15.507 20:18:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:15.507 20:18:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:18:15.507 20:18:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:18:15.507 20:18:52 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:18:15.507 20:18:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:18:15.507 20:18:52 -- common/autotest_common.sh@710 -- # xtrace_disable
00:18:15.507 20:18:52 -- common/autotest_common.sh@10 -- # set +x
00:18:15.507 20:18:52 -- nvmf/common.sh@469 -- # nvmfpid=1794728
00:18:15.507 20:18:52 -- nvmf/common.sh@470 -- # waitforlisten 1794728
00:18:15.507 20:18:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:18:15.507 20:18:52 -- common/autotest_common.sh@817 -- # '[' -z 1794728 ']'
00:18:15.507 20:18:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:15.507 20:18:52 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:15.507 20:18:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:15.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:15.507 20:18:52 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:15.507 20:18:52 -- common/autotest_common.sh@10 -- # set +x
00:18:15.507 [2024-02-14 20:18:52.278923] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:18:15.507 [2024-02-14 20:18:52.278965] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:15.507 EAL: No free 2048 kB hugepages reported on node 1
00:18:15.507 [2024-02-14 20:18:52.341329] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:15.507 [2024-02-14 20:18:52.416444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:18:15.507 [2024-02-14 20:18:52.416546] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:15.507 [2024-02-14 20:18:52.416553] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:15.507 [2024-02-14 20:18:52.416559] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
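The namespace topology that nvmf_tcp_init assembled in the trace above is worth pulling out. A minimal sketch of the same setup, using the names from this run (cvl_0_0/cvl_0_1 are the net devices found under the two E810 ports 0000:af:00.0/0000:af:00.1; another host will have different interface names), after flushing any stale addresses:

    # target port moves into a private namespace; its peer stays on the host as initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open host firewall for NVMe/TCP
    ping -c 1 10.0.0.2                                                  # host -> target sanity check

Since both pings succeed, the two ports are evidently cabled so they can reach each other, meaning traffic between 10.0.0.1 and 10.0.0.2 exercises the real NIC datapath rather than loopback; nvmf_tgt is then launched inside the namespace via the ip netns exec invocation shown above.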
00:18:15.507 [2024-02-14 20:18:52.416710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:15.507 [2024-02-14 20:18:52.416814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:15.507 [2024-02-14 20:18:52.416921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.507 [2024-02-14 20:18:52.416922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:15.764 20:18:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:15.764 20:18:53 -- common/autotest_common.sh@850 -- # return 0 00:18:15.764 20:18:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:15.764 20:18:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:15.764 20:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:15.764 20:18:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.764 20:18:53 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.764 20:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.764 20:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:15.764 [2024-02-14 20:18:53.115899] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.764 20:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.764 20:18:53 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:15.764 20:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.764 20:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:15.764 Malloc0 00:18:15.764 20:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.764 20:18:53 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:15.764 20:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.764 20:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:15.764 20:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.764 20:18:53 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.764 20:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.764 20:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:15.764 20:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.764 20:18:53 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.764 20:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.764 20:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:15.764 [2024-02-14 20:18:53.167247] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.764 20:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.764 20:18:53 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:15.764 20:18:53 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:15.764 20:18:53 -- nvmf/common.sh@520 -- # config=() 00:18:15.764 20:18:53 -- nvmf/common.sh@520 -- # local subsystem config 00:18:15.764 20:18:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:15.764 20:18:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:15.764 { 00:18:15.764 "params": { 00:18:15.764 "name": "Nvme$subsystem", 00:18:15.764 "trtype": "$TEST_TRANSPORT", 00:18:15.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.764 "adrfam": "ipv4", 00:18:15.764 "trsvcid": 
"$NVMF_PORT", 00:18:15.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.764 "hdgst": ${hdgst:-false}, 00:18:15.764 "ddgst": ${ddgst:-false} 00:18:15.764 }, 00:18:15.764 "method": "bdev_nvme_attach_controller" 00:18:15.764 } 00:18:15.764 EOF 00:18:15.764 )") 00:18:15.764 20:18:53 -- nvmf/common.sh@542 -- # cat 00:18:16.033 20:18:53 -- nvmf/common.sh@544 -- # jq . 00:18:16.033 20:18:53 -- nvmf/common.sh@545 -- # IFS=, 00:18:16.033 20:18:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:16.033 "params": { 00:18:16.033 "name": "Nvme1", 00:18:16.033 "trtype": "tcp", 00:18:16.033 "traddr": "10.0.0.2", 00:18:16.033 "adrfam": "ipv4", 00:18:16.033 "trsvcid": "4420", 00:18:16.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.033 "hdgst": false, 00:18:16.033 "ddgst": false 00:18:16.033 }, 00:18:16.033 "method": "bdev_nvme_attach_controller" 00:18:16.033 }' 00:18:16.033 [2024-02-14 20:18:53.211541] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:18:16.033 [2024-02-14 20:18:53.211583] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794973 ] 00:18:16.033 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.033 [2024-02-14 20:18:53.271955] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:16.033 [2024-02-14 20:18:53.343406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.033 [2024-02-14 20:18:53.343505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.033 [2024-02-14 20:18:53.343506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.033 [2024-02-14 20:18:53.343590] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:18:16.296 [2024-02-14 20:18:53.498367] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:16.297 [2024-02-14 20:18:53.498400] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:16.297 I/O targets: 00:18:16.297 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:16.297 00:18:16.297 00:18:16.297 CUnit - A unit testing framework for C - Version 2.1-3 00:18:16.297 http://cunit.sourceforge.net/ 00:18:16.297 00:18:16.297 00:18:16.297 Suite: bdevio tests on: Nvme1n1 00:18:16.297 Test: blockdev write read block ...passed 00:18:16.297 Test: blockdev write zeroes read block ...passed 00:18:16.297 Test: blockdev write zeroes read no split ...passed 00:18:16.297 Test: blockdev write zeroes read split ...passed 00:18:16.297 Test: blockdev write zeroes read split partial ...passed 00:18:16.297 Test: blockdev reset ...[2024-02-14 20:18:53.692470] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.297 [2024-02-14 20:18:53.692525] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a7c70 (9): Bad file descriptor 00:18:16.555 [2024-02-14 20:18:53.745458] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:16.555 passed 00:18:16.555 Test: blockdev write read 8 blocks ...passed 00:18:16.555 Test: blockdev write read size > 128k ...passed 00:18:16.555 Test: blockdev write read invalid size ...passed 00:18:16.555 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:16.555 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:16.555 Test: blockdev write read max offset ...passed 00:18:16.555 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:16.555 Test: blockdev writev readv 8 blocks ...passed 00:18:16.555 Test: blockdev writev readv 30 x 1block ...passed 00:18:16.555 Test: blockdev writev readv block ...passed 00:18:16.555 Test: blockdev writev readv size > 128k ...passed 00:18:16.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:16.813 Test: blockdev comparev and writev ...[2024-02-14 20:18:53.973265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.813 [2024-02-14 20:18:53.973294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:53.973309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.813 [2024-02-14 20:18:53.973317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:53.973761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.813 [2024-02-14 20:18:53.973774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:53.973785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.813 [2024-02-14 20:18:53.973793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:53.974240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.813 [2024-02-14 20:18:53.974251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:53.974263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.813 [2024-02-14 20:18:53.974271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:53.974741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.813 [2024-02-14 20:18:53.974752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:53.974764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.813 [2024-02-14 20:18:53.974772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:16.813 passed 00:18:16.813 Test: blockdev nvme passthru rw ...passed 00:18:16.813 Test: blockdev nvme passthru vendor specific ...[2024-02-14 20:18:54.058291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:16.813 [2024-02-14 20:18:54.058307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:54.058617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:16.813 [2024-02-14 20:18:54.058628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:54.058936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:16.813 [2024-02-14 20:18:54.058947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:16.813 [2024-02-14 20:18:54.059246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:16.813 [2024-02-14 20:18:54.059257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:16.813 passed 00:18:16.813 Test: blockdev nvme admin passthru ...passed 00:18:16.813 Test: blockdev copy ...passed 00:18:16.813 00:18:16.813 Run Summary: Type Total Ran Passed Failed Inactive 00:18:16.813 suites 1 1 n/a 0 0 00:18:16.813 tests 23 23 23 0 0 00:18:16.813 asserts 152 152 152 0 n/a 00:18:16.813 00:18:16.813 Elapsed time = 1.232 seconds 00:18:16.813 [2024-02-14 20:18:54.116142] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:18:17.120 20:18:54 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.120 20:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.120 20:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:17.121 20:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.121 20:18:54 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:17.121 20:18:54 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:17.121 20:18:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:17.121 20:18:54 -- nvmf/common.sh@116 -- # sync 00:18:17.121 20:18:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:17.121 20:18:54 -- nvmf/common.sh@119 -- # set +e 00:18:17.121 20:18:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:17.121 20:18:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:17.121 rmmod nvme_tcp 00:18:17.121 rmmod nvme_fabrics 00:18:17.121 rmmod nvme_keyring 00:18:17.121 20:18:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:17.121 20:18:54 -- nvmf/common.sh@123 -- # set -e 00:18:17.121 20:18:54 -- nvmf/common.sh@124 -- # return 0 00:18:17.121 20:18:54 -- nvmf/common.sh@477 -- # '[' -n 1794728 ']' 00:18:17.121 20:18:54 -- nvmf/common.sh@478 -- # killprocess 1794728 00:18:17.121 20:18:54 -- common/autotest_common.sh@924 -- # '[' -z 1794728 ']' 00:18:17.121 20:18:54 -- common/autotest_common.sh@928 -- # kill -0 1794728 00:18:17.121 20:18:54 -- common/autotest_common.sh@929 
-- # uname 00:18:17.121 20:18:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:17.121 20:18:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1794728 00:18:17.121 20:18:54 -- common/autotest_common.sh@930 -- # process_name=reactor_3 00:18:17.121 20:18:54 -- common/autotest_common.sh@934 -- # '[' reactor_3 = sudo ']' 00:18:17.121 20:18:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1794728' 00:18:17.121 killing process with pid 1794728 00:18:17.121 20:18:54 -- common/autotest_common.sh@943 -- # kill 1794728 00:18:17.121 20:18:54 -- common/autotest_common.sh@948 -- # wait 1794728 00:18:17.399 20:18:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:17.399 20:18:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:17.399 20:18:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:17.399 20:18:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.399 20:18:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:17.399 20:18:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.399 20:18:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.399 20:18:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.304 20:18:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:19.304 00:18:19.304 real 0m10.574s 00:18:19.304 user 0m12.518s 00:18:19.304 sys 0m5.041s 00:18:19.304 20:18:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:19.304 20:18:56 -- common/autotest_common.sh@10 -- # set +x 00:18:19.304 ************************************ 00:18:19.304 END TEST nvmf_bdevio 00:18:19.304 ************************************ 00:18:19.563 20:18:56 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:18:19.563 20:18:56 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:19.563 20:18:56 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:18:19.563 20:18:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:19.563 20:18:56 -- common/autotest_common.sh@10 -- # set +x 00:18:19.563 ************************************ 00:18:19.563 START TEST nvmf_bdevio_no_huge 00:18:19.563 ************************************ 00:18:19.563 20:18:56 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:19.563 * Looking for test storage... 
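Before the no-huge variant repeats the same flow below, it is worth condensing what bdevio.sh actually did to provision the target. The rpc_cmd calls traced earlier reduce to five RPCs; a hand-run equivalent via scripts/rpc.py (the rpc_cmd wrapper ultimately issues the same methods over /var/tmp/spdk.sock) would look roughly like:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # flags as in NVMF_TRANSPORT_OPTS above
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects to that listener as an NVMe/TCP initiator and drives its 23-test CUnit suite against the resulting Nvme1n1 bdev; the nvmf_bdevio_no_huge run that starts here repeats exactly this sequence with a different memory backend.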
00:18:19.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.563 20:18:56 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.563 20:18:56 -- nvmf/common.sh@7 -- # uname -s 00:18:19.563 20:18:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.563 20:18:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.563 20:18:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.563 20:18:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.563 20:18:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.563 20:18:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.563 20:18:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.563 20:18:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.563 20:18:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.563 20:18:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.563 20:18:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:19.563 20:18:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:19.563 20:18:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.563 20:18:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.563 20:18:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.563 20:18:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.563 20:18:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.563 20:18:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.563 20:18:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.563 20:18:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.563 20:18:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.563 20:18:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.563 20:18:56 -- paths/export.sh@5 -- # export PATH 00:18:19.563 20:18:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.563 20:18:56 -- nvmf/common.sh@46 -- # : 0 00:18:19.563 20:18:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:19.563 20:18:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:19.563 20:18:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:19.563 20:18:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.563 20:18:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.563 20:18:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:19.563 20:18:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:19.563 20:18:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:19.563 20:18:56 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.563 20:18:56 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.563 20:18:56 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:19.563 20:18:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:19.563 20:18:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.563 20:18:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:19.563 20:18:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:19.563 20:18:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:19.563 20:18:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.563 20:18:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.563 20:18:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.563 20:18:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:19.563 20:18:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:19.563 20:18:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:19.564 20:18:56 -- common/autotest_common.sh@10 -- # set +x 00:18:26.127 20:19:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:26.127 20:19:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:26.127 20:19:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:26.127 20:19:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:26.127 20:19:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:26.127 20:19:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:26.127 20:19:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:26.127 20:19:02 -- nvmf/common.sh@294 -- # net_devs=() 00:18:26.127 20:19:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:26.127 20:19:02 -- nvmf/common.sh@295 
-- # e810=() 00:18:26.127 20:19:02 -- nvmf/common.sh@295 -- # local -ga e810 00:18:26.127 20:19:02 -- nvmf/common.sh@296 -- # x722=() 00:18:26.127 20:19:02 -- nvmf/common.sh@296 -- # local -ga x722 00:18:26.127 20:19:02 -- nvmf/common.sh@297 -- # mlx=() 00:18:26.127 20:19:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:26.127 20:19:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.127 20:19:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.127 20:19:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.128 20:19:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.128 20:19:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.128 20:19:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.128 20:19:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.128 20:19:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.128 20:19:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.128 20:19:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.128 20:19:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.128 20:19:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:26.128 20:19:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:26.128 20:19:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:26.128 20:19:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.128 20:19:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:26.128 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:26.128 20:19:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.128 20:19:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:26.128 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:26.128 20:19:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:26.128 20:19:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.128 20:19:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.128 20:19:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.128 20:19:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.128 20:19:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:26.128 Found 
net devices under 0000:af:00.0: cvl_0_0 00:18:26.128 20:19:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.128 20:19:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.128 20:19:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.128 20:19:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.128 20:19:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.128 20:19:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:26.128 Found net devices under 0000:af:00.1: cvl_0_1 00:18:26.128 20:19:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.128 20:19:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:26.128 20:19:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:26.128 20:19:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:26.128 20:19:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:26.128 20:19:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.128 20:19:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.128 20:19:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.128 20:19:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:26.128 20:19:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.128 20:19:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.128 20:19:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:26.128 20:19:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.128 20:19:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.128 20:19:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:26.128 20:19:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:26.128 20:19:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.128 20:19:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.128 20:19:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.128 20:19:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.128 20:19:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:26.128 20:19:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.128 20:19:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.128 20:19:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.128 20:19:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:26.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:18:26.128 00:18:26.128 --- 10.0.0.2 ping statistics --- 00:18:26.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.128 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:18:26.128 20:19:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:26.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms
00:18:26.128
00:18:26.128 --- 10.0.0.1 ping statistics ---
00:18:26.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:26.128 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:18:26.128 20:19:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:26.128 20:19:03 -- nvmf/common.sh@410 -- # return 0
00:18:26.128 20:19:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:18:26.128 20:19:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:26.128 20:19:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:18:26.128 20:19:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:18:26.128 20:19:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:26.128 20:19:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:18:26.128 20:19:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:18:26.128 20:19:03 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:18:26.128 20:19:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:18:26.128 20:19:03 -- common/autotest_common.sh@710 -- # xtrace_disable
00:18:26.128 20:19:03 -- common/autotest_common.sh@10 -- # set +x
00:18:26.128 20:19:03 -- nvmf/common.sh@469 -- # nvmfpid=1799008
00:18:26.128 20:19:03 -- nvmf/common.sh@470 -- # waitforlisten 1799008
00:18:26.128 20:19:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:18:26.128 20:19:03 -- common/autotest_common.sh@817 -- # '[' -z 1799008 ']'
00:18:26.128 20:19:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:26.128 20:19:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:26.128 20:19:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:26.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:26.128 20:19:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:26.128 20:19:03 -- common/autotest_common.sh@10 -- # set +x
00:18:26.128 [2024-02-14 20:19:03.186095] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:18:26.128 [2024-02-14 20:19:03.186136] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
00:18:26.128 [2024-02-14 20:19:03.254307] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:26.128 [2024-02-14 20:19:03.334775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:18:26.128 [2024-02-14 20:19:03.334886] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:26.128 [2024-02-14 20:19:03.334895] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:26.128 [2024-02-14 20:19:03.334901] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
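The second pass differs from the first only in its memory backend: both the target and bdevio now run with --no-huge -s 1024, so DPDK carves its 1024 MB out of ordinary 4 KiB pages in VA IOVA mode instead of reserving hugepages (note the --no-huge --iova-mode=va in the EAL parameter dump above, and the absence of the first run's 'No free 2048 kB hugepages' complaint). The two launch lines from this run, paths abbreviated relative to the spdk checkout:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    ./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024

-m 0x78 pins the target's reactors to cores 3-6, which matches the 'Reactor started on core 3..6' notices that follow; bdevio itself comes up on cores 0-2 (-c 0x7 in its own EAL parameters).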
00:18:26.128 [2024-02-14 20:19:03.335005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:26.128 [2024-02-14 20:19:03.335114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:26.128 [2024-02-14 20:19:03.335220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:26.128 [2024-02-14 20:19:03.335222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:26.692 20:19:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:26.692 20:19:03 -- common/autotest_common.sh@850 -- # return 0 00:18:26.692 20:19:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:26.692 20:19:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:26.692 20:19:03 -- common/autotest_common.sh@10 -- # set +x 00:18:26.692 20:19:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.692 20:19:04 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:26.692 20:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.692 20:19:04 -- common/autotest_common.sh@10 -- # set +x 00:18:26.692 [2024-02-14 20:19:04.015128] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.692 20:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.692 20:19:04 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:26.692 20:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.692 20:19:04 -- common/autotest_common.sh@10 -- # set +x 00:18:26.692 Malloc0 00:18:26.692 20:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.692 20:19:04 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:26.692 20:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.692 20:19:04 -- common/autotest_common.sh@10 -- # set +x 00:18:26.692 20:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.692 20:19:04 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:26.692 20:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.692 20:19:04 -- common/autotest_common.sh@10 -- # set +x 00:18:26.692 20:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.693 20:19:04 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.693 20:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.693 20:19:04 -- common/autotest_common.sh@10 -- # set +x 00:18:26.693 [2024-02-14 20:19:04.055388] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.693 20:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.693 20:19:04 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:26.693 20:19:04 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:26.693 20:19:04 -- nvmf/common.sh@520 -- # config=() 00:18:26.693 20:19:04 -- nvmf/common.sh@520 -- # local subsystem config 00:18:26.693 20:19:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:26.693 20:19:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:26.693 { 00:18:26.693 "params": { 00:18:26.693 "name": "Nvme$subsystem", 00:18:26.693 "trtype": "$TEST_TRANSPORT", 00:18:26.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.693 "adrfam": "ipv4", 00:18:26.693 
"trsvcid": "$NVMF_PORT", 00:18:26.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.693 "hdgst": ${hdgst:-false}, 00:18:26.693 "ddgst": ${ddgst:-false} 00:18:26.693 }, 00:18:26.693 "method": "bdev_nvme_attach_controller" 00:18:26.693 } 00:18:26.693 EOF 00:18:26.693 )") 00:18:26.693 20:19:04 -- nvmf/common.sh@542 -- # cat 00:18:26.693 20:19:04 -- nvmf/common.sh@544 -- # jq . 00:18:26.693 20:19:04 -- nvmf/common.sh@545 -- # IFS=, 00:18:26.693 20:19:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:26.693 "params": { 00:18:26.693 "name": "Nvme1", 00:18:26.693 "trtype": "tcp", 00:18:26.693 "traddr": "10.0.0.2", 00:18:26.693 "adrfam": "ipv4", 00:18:26.693 "trsvcid": "4420", 00:18:26.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.693 "hdgst": false, 00:18:26.693 "ddgst": false 00:18:26.693 }, 00:18:26.693 "method": "bdev_nvme_attach_controller" 00:18:26.693 }' 00:18:26.693 [2024-02-14 20:19:04.100859] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:18:26.693 [2024-02-14 20:19:04.100905] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1799255 ] 00:18:27.041 [2024-02-14 20:19:04.164760] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:27.041 [2024-02-14 20:19:04.247324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.041 [2024-02-14 20:19:04.247420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.041 [2024-02-14 20:19:04.247422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.041 [2024-02-14 20:19:04.247507] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:18:27.041 [2024-02-14 20:19:04.429449] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:27.041 [2024-02-14 20:19:04.429480] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:27.041 I/O targets: 00:18:27.041 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:27.041 00:18:27.041 00:18:27.041 CUnit - A unit testing framework for C - Version 2.1-3 00:18:27.041 http://cunit.sourceforge.net/ 00:18:27.041 00:18:27.041 00:18:27.041 Suite: bdevio tests on: Nvme1n1 00:18:27.299 Test: blockdev write read block ...passed 00:18:27.299 Test: blockdev write zeroes read block ...passed 00:18:27.299 Test: blockdev write zeroes read no split ...passed 00:18:27.299 Test: blockdev write zeroes read split ...passed 00:18:27.299 Test: blockdev write zeroes read split partial ...passed 00:18:27.299 Test: blockdev reset ...[2024-02-14 20:19:04.630195] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:27.299 [2024-02-14 20:19:04.630257] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234b680 (9): Bad file descriptor 00:18:27.299 [2024-02-14 20:19:04.688077] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:27.299 passed 00:18:27.299 Test: blockdev write read 8 blocks ...passed 00:18:27.299 Test: blockdev write read size > 128k ...passed 00:18:27.299 Test: blockdev write read invalid size ...passed 00:18:27.556 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:27.556 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:27.556 Test: blockdev write read max offset ...passed 00:18:27.556 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:27.556 Test: blockdev writev readv 8 blocks ...passed 00:18:27.556 Test: blockdev writev readv 30 x 1block ...passed 00:18:27.556 Test: blockdev writev readv block ...passed 00:18:27.556 Test: blockdev writev readv size > 128k ...passed 00:18:27.556 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:27.556 Test: blockdev comparev and writev ...[2024-02-14 20:19:04.917177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:27.556 [2024-02-14 20:19:04.917206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.556 [2024-02-14 20:19:04.917219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:27.556 [2024-02-14 20:19:04.917226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:27.556 [2024-02-14 20:19:04.917661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:27.556 [2024-02-14 20:19:04.917673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:27.556 [2024-02-14 20:19:04.917684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:27.556 [2024-02-14 20:19:04.917691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:27.556 [2024-02-14 20:19:04.918105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:27.556 [2024-02-14 20:19:04.918116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:27.556 [2024-02-14 20:19:04.918128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:27.556 [2024-02-14 20:19:04.918135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:27.556 [2024-02-14 20:19:04.918572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:27.556 [2024-02-14 20:19:04.918583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:27.556 [2024-02-14 20:19:04.918594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:27.556 [2024-02-14 20:19:04.918603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:27.556 passed 00:18:27.814 Test: blockdev nvme passthru rw ...passed 00:18:27.814 Test: blockdev nvme passthru vendor specific ...[2024-02-14 20:19:05.001279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:27.814 [2024-02-14 20:19:05.001300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:27.814 [2024-02-14 20:19:05.001610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:27.814 [2024-02-14 20:19:05.001621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:27.814 [2024-02-14 20:19:05.001930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:27.814 [2024-02-14 20:19:05.001941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:27.814 [2024-02-14 20:19:05.002247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:27.814 [2024-02-14 20:19:05.002258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:27.814 passed 00:18:27.814 Test: blockdev nvme admin passthru ...passed 00:18:27.814 Test: blockdev copy ...passed 00:18:27.814 00:18:27.814 Run Summary: Type Total Ran Passed Failed Inactive 00:18:27.814 suites 1 1 n/a 0 0 00:18:27.814 tests 23 23 23 0 0 00:18:27.814 asserts 152 152 152 0 n/a 00:18:27.814 00:18:27.814 Elapsed time = 1.266 seconds 00:18:27.814 [2024-02-14 20:19:05.063180] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:18:28.071 20:19:05 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.071 20:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.071 20:19:05 -- common/autotest_common.sh@10 -- # set +x 00:18:28.071 20:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.071 20:19:05 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:28.071 20:19:05 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:28.071 20:19:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:28.071 20:19:05 -- nvmf/common.sh@116 -- # sync 00:18:28.071 20:19:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:28.071 20:19:05 -- nvmf/common.sh@119 -- # set +e 00:18:28.071 20:19:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:28.071 20:19:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:28.071 rmmod nvme_tcp 00:18:28.071 rmmod nvme_fabrics 00:18:28.071 rmmod nvme_keyring 00:18:28.071 20:19:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:28.071 20:19:05 -- nvmf/common.sh@123 -- # set -e 00:18:28.071 20:19:05 -- nvmf/common.sh@124 -- # return 0 00:18:28.071 20:19:05 -- nvmf/common.sh@477 -- # '[' -n 1799008 ']' 00:18:28.071 20:19:05 -- nvmf/common.sh@478 -- # killprocess 1799008 00:18:28.071 20:19:05 -- common/autotest_common.sh@924 -- # '[' -z 1799008 ']' 00:18:28.071 20:19:05 -- common/autotest_common.sh@928 -- # kill -0 1799008 00:18:28.071 20:19:05 -- common/autotest_common.sh@929 
-- # uname 00:18:28.071 20:19:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:28.071 20:19:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1799008 00:18:28.071 20:19:05 -- common/autotest_common.sh@930 -- # process_name=reactor_3 00:18:28.071 20:19:05 -- common/autotest_common.sh@934 -- # '[' reactor_3 = sudo ']' 00:18:28.071 20:19:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1799008' 00:18:28.072 killing process with pid 1799008 00:18:28.072 20:19:05 -- common/autotest_common.sh@943 -- # kill 1799008 00:18:28.072 20:19:05 -- common/autotest_common.sh@948 -- # wait 1799008 00:18:28.638 20:19:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:28.638 20:19:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:28.638 20:19:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:28.638 20:19:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.638 20:19:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:28.638 20:19:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.638 20:19:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.638 20:19:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.544 20:19:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:30.544 00:18:30.544 real 0m11.097s 00:18:30.544 user 0m13.467s 00:18:30.544 sys 0m5.560s 00:18:30.544 20:19:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:30.544 20:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:30.544 ************************************ 00:18:30.544 END TEST nvmf_bdevio_no_huge 00:18:30.544 ************************************ 00:18:30.544 20:19:07 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:30.544 20:19:07 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:18:30.544 20:19:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:30.544 20:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:30.544 ************************************ 00:18:30.544 START TEST nvmf_tls 00:18:30.544 ************************************ 00:18:30.544 20:19:07 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:30.803 * Looking for test storage... 
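The teardown above is deliberately paranoid: before killing the nvmf target it re-reads the process name for the recorded pid and refuses to signal anything that is no longer the expected reactor (or that resolves to sudo). A minimal sketch of that guard-then-kill shape, assuming the daemon was started as a child of the current shell; helper and variable names here are illustrative, not the exact autotest_common.sh source:

  # Kill $pid only if it still names the process we launched.
  safe_kill() {
      local pid=$1 expected=$2 name
      name=$(ps --no-headers -o comm= "$pid") || return 0   # already exited
      [[ $name == "$expected" ]] || return 1                # pid was recycled
      [[ $name == sudo ]] && return 1                       # never SIGKILL the sudo wrapper
      kill -9 "$pid"
      wait "$pid" 2>/dev/null || true                       # reap it so ports and shm are freed
  }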
00:18:30.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.803 20:19:07 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.803 20:19:07 -- nvmf/common.sh@7 -- # uname -s 00:18:30.803 20:19:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.803 20:19:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.803 20:19:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.803 20:19:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.803 20:19:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.803 20:19:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.803 20:19:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.803 20:19:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.803 20:19:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.803 20:19:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.803 20:19:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:30.803 20:19:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:30.803 20:19:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.803 20:19:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.803 20:19:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.803 20:19:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.803 20:19:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.803 20:19:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.803 20:19:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.803 20:19:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.803 20:19:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.803 20:19:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.803 20:19:07 -- paths/export.sh@5 -- # export PATH 00:18:30.803 20:19:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.803 20:19:07 -- nvmf/common.sh@46 -- # : 0 00:18:30.803 20:19:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:30.803 20:19:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:30.803 20:19:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:30.803 20:19:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.803 20:19:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.803 20:19:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:30.803 20:19:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:30.803 20:19:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:30.803 20:19:08 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.803 20:19:08 -- target/tls.sh@71 -- # nvmftestinit 00:18:30.803 20:19:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:30.803 20:19:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.803 20:19:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:30.803 20:19:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:30.803 20:19:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:30.803 20:19:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.803 20:19:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.803 20:19:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.803 20:19:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:30.803 20:19:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:30.803 20:19:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:30.803 20:19:08 -- common/autotest_common.sh@10 -- # set +x 00:18:36.078 20:19:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:36.078 20:19:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:36.078 20:19:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:36.078 20:19:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:36.078 20:19:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:36.078 20:19:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:36.078 20:19:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:36.078 20:19:13 -- nvmf/common.sh@294 -- # net_devs=() 00:18:36.078 20:19:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:36.078 20:19:13 -- nvmf/common.sh@295 -- # e810=() 00:18:36.078 
20:19:13 -- nvmf/common.sh@295 -- # local -ga e810 00:18:36.078 20:19:13 -- nvmf/common.sh@296 -- # x722=() 00:18:36.078 20:19:13 -- nvmf/common.sh@296 -- # local -ga x722 00:18:36.078 20:19:13 -- nvmf/common.sh@297 -- # mlx=() 00:18:36.078 20:19:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:36.078 20:19:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.078 20:19:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:36.078 20:19:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:36.078 20:19:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:36.078 20:19:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:36.078 20:19:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:36.078 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:36.078 20:19:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:36.078 20:19:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:36.078 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:36.078 20:19:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:36.078 20:19:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:36.078 20:19:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.078 20:19:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:36.078 20:19:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.078 20:19:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:36.078 Found net devices under 
0000:af:00.0: cvl_0_0 00:18:36.078 20:19:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.078 20:19:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:36.078 20:19:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.078 20:19:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:36.078 20:19:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.078 20:19:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:36.078 Found net devices under 0000:af:00.1: cvl_0_1 00:18:36.078 20:19:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.078 20:19:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:36.078 20:19:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:36.078 20:19:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:36.078 20:19:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:36.078 20:19:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.078 20:19:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.078 20:19:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.078 20:19:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:36.078 20:19:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.078 20:19:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.078 20:19:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:36.078 20:19:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.078 20:19:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.078 20:19:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:36.078 20:19:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:36.078 20:19:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.078 20:19:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.337 20:19:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.337 20:19:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.337 20:19:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:36.337 20:19:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.337 20:19:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.337 20:19:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.337 20:19:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:36.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:18:36.337 00:18:36.337 --- 10.0.0.2 ping statistics --- 00:18:36.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.337 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:18:36.337 20:19:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:36.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:18:36.337 00:18:36.337 --- 10.0.0.1 ping statistics --- 00:18:36.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.337 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:18:36.337 20:19:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.337 20:19:13 -- nvmf/common.sh@410 -- # return 0 00:18:36.338 20:19:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:36.338 20:19:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.338 20:19:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:36.338 20:19:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:36.338 20:19:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.338 20:19:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:36.338 20:19:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:36.338 20:19:13 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:36.338 20:19:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:36.338 20:19:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:36.338 20:19:13 -- common/autotest_common.sh@10 -- # set +x 00:18:36.338 20:19:13 -- nvmf/common.sh@469 -- # nvmfpid=1803276 00:18:36.338 20:19:13 -- nvmf/common.sh@470 -- # waitforlisten 1803276 00:18:36.338 20:19:13 -- common/autotest_common.sh@817 -- # '[' -z 1803276 ']' 00:18:36.338 20:19:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.338 20:19:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:36.338 20:19:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.338 20:19:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:36.338 20:19:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:36.338 20:19:13 -- common/autotest_common.sh@10 -- # set +x 00:18:36.596 [2024-02-14 20:19:13.793038] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:18:36.596 [2024-02-14 20:19:13.793080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.596 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.596 [2024-02-14 20:19:13.855384] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.596 [2024-02-14 20:19:13.930243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:36.596 [2024-02-14 20:19:13.930344] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.596 [2024-02-14 20:19:13.930351] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.596 [2024-02-14 20:19:13.930357] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
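The nvmf_tcp_init trace above moves one E810 port into a private network namespace, addresses both ends from 10.0.0.0/24, opens TCP/4420, and proves reachability with a ping in each direction before the target app comes up. A condensed sketch of that plumbing, keeping the cvl_0_* device names this rig uses:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator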
00:18:36.596 [2024-02-14 20:19:13.930376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.171 20:19:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:37.171 20:19:14 -- common/autotest_common.sh@850 -- # return 0 00:18:37.171 20:19:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:37.171 20:19:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.171 20:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.430 20:19:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.430 20:19:14 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:18:37.430 20:19:14 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:37.430 true 00:18:37.430 20:19:14 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:37.430 20:19:14 -- target/tls.sh@82 -- # jq -r .tls_version 00:18:37.689 20:19:14 -- target/tls.sh@82 -- # version=0 00:18:37.689 20:19:14 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:18:37.689 20:19:14 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:37.689 20:19:15 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:37.689 20:19:15 -- target/tls.sh@90 -- # jq -r .tls_version 00:18:37.948 20:19:15 -- target/tls.sh@90 -- # version=13 00:18:37.948 20:19:15 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:18:37.948 20:19:15 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:38.207 20:19:15 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:38.207 20:19:15 -- target/tls.sh@98 -- # jq -r .tls_version 00:18:38.207 20:19:15 -- target/tls.sh@98 -- # version=7 00:18:38.207 20:19:15 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:18:38.207 20:19:15 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:38.207 20:19:15 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:38.466 20:19:15 -- target/tls.sh@105 -- # ktls=false 00:18:38.466 20:19:15 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:18:38.466 20:19:15 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:38.724 20:19:15 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:38.724 20:19:15 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:38.724 20:19:16 -- target/tls.sh@113 -- # ktls=true 00:18:38.724 20:19:16 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:18:38.724 20:19:16 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:38.983 20:19:16 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:38.983 20:19:16 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:18:38.983 20:19:16 -- target/tls.sh@121 -- # ktls=false 00:18:38.983 20:19:16 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:18:38.983 20:19:16 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
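Every option change above follows the same set-then-verify shape: write a setting on the ssl sock implementation over JSON-RPC, read the options back, and compare the jq-extracted field before moving on (the format_interchange_psk call at the end of this stretch is traced next). A minimal sketch of that shape, with the repository path shortened to ./scripts/rpc.py:

  rpc=./scripts/rpc.py
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  [[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]] || exit 1
  $rpc sock_impl_set_options -i ssl --enable-ktls
  [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]] || exit 1
  $rpc sock_impl_set_options -i ssl --disable-ktls        # these tests run with kTLS off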
00:18:38.983 20:19:16 -- target/tls.sh@49 -- # local key hash crc 00:18:38.983 20:19:16 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:18:38.983 20:19:16 -- target/tls.sh@51 -- # hash=01 00:18:38.983 20:19:16 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:18:38.983 20:19:16 -- target/tls.sh@52 -- # gzip -1 -c 00:18:38.983 20:19:16 -- target/tls.sh@52 -- # tail -c8 00:18:39.242 20:19:16 -- target/tls.sh@52 -- # head -c 4 00:18:39.242 20:19:16 -- target/tls.sh@52 -- # crc='p$H�' 00:18:39.242 20:19:16 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:39.242 20:19:16 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:18:39.242 20:19:16 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:39.242 20:19:16 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:39.242 20:19:16 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:18:39.242 20:19:16 -- target/tls.sh@49 -- # local key hash crc 00:18:39.242 20:19:16 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:18:39.242 20:19:16 -- target/tls.sh@51 -- # hash=01 00:18:39.242 20:19:16 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:18:39.242 20:19:16 -- target/tls.sh@52 -- # gzip -1 -c 00:18:39.242 20:19:16 -- target/tls.sh@52 -- # head -c 4 00:18:39.242 20:19:16 -- target/tls.sh@52 -- # tail -c8 00:18:39.242 20:19:16 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:18:39.242 20:19:16 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:39.242 20:19:16 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:18:39.242 20:19:16 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:39.242 20:19:16 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:39.242 20:19:16 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:39.242 20:19:16 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:39.242 20:19:16 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:39.242 20:19:16 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:39.242 20:19:16 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:39.242 20:19:16 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:39.242 20:19:16 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:39.242 20:19:16 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:39.501 20:19:16 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:39.501 20:19:16 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:39.501 20:19:16 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:39.760 [2024-02-14 20:19:16.974065] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
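The key derivation traced above rests on a gzip trick: per RFC 1952 a gzip stream ends with CRC32 (little-endian) followed by ISIZE, four bytes each, so gzip -1 -c | tail -c8 | head -c4 emits the raw CRC32 of the key material. The retained-key string is then NVMeTLSkey-1:<hash>:base64(key || crc32):, where hash 01 denotes HMAC-SHA-256 (the 02 variant used later in this run denotes HMAC-SHA-384). A sketch of the derivation, assuming the CRC bytes contain no NULs, which holds for these sample keys; the script itself streams the bytes through process substitution (the /dev/fd/62 in the trace) rather than a shell variable:

  key=00112233445566778899aabbccddeeff                      # configured PSK material
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)  # CRC32, little-endian
  echo -n "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):" > key1.txt
  chmod 0600 key1.txt                                       # PSK files must not be world-readable

The file then feeds both ends of the connection: nvmf_subsystem_add_host ... --psk key1.txt on the target, and the --psk/--psk-path options of the initiator tools used below.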
00:18:39.760 20:19:16 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:39.760 20:19:17 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:40.018 [2024-02-14 20:19:17.294892] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.018 [2024-02-14 20:19:17.295087] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.018 20:19:17 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:40.276 malloc0 00:18:40.276 20:19:17 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:40.276 20:19:17 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:40.535 20:19:17 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:40.535 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.512 Initializing NVMe Controllers 00:18:50.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:50.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:50.512 Initialization complete. Launching workers. 
00:18:50.512 ======================================================== 00:18:50.512 Latency(us) 00:18:50.512 Device Information : IOPS MiB/s Average min max 00:18:50.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17548.58 68.55 3647.35 797.78 4896.64 00:18:50.512 ======================================================== 00:18:50.512 Total : 17548.58 68.55 3647.35 797.78 4896.64 00:18:50.512 00:18:50.512 20:19:27 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:50.512 20:19:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:50.512 20:19:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:50.512 20:19:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:50.512 20:19:27 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:50.512 20:19:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.512 20:19:27 -- target/tls.sh@28 -- # bdevperf_pid=1805647 00:18:50.512 20:19:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.512 20:19:27 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.512 20:19:27 -- target/tls.sh@31 -- # waitforlisten 1805647 /var/tmp/bdevperf.sock 00:18:50.512 20:19:27 -- common/autotest_common.sh@817 -- # '[' -z 1805647 ']' 00:18:50.512 20:19:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.512 20:19:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:50.512 20:19:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.512 20:19:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:50.513 20:19:27 -- common/autotest_common.sh@10 -- # set +x 00:18:50.772 [2024-02-14 20:19:27.929906] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
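run_bdevperf then exercises the same TLS path from the initiator side: start bdevperf idle (-z) on its own RPC socket, attach a TLS-wrapped controller through that socket, and only then kick off the verify workload. A condensed sketch of the flow whose startup is being traced here, with repository paths shortened:

  sock=/var/tmp/bdevperf.sock
  ./build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
  # ...wait for $sock to accept RPCs before continuing...
  ./scripts/rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key1.txt
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests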
00:18:50.772 [2024-02-14 20:19:27.929953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805647 ] 00:18:50.772 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.772 [2024-02-14 20:19:27.984073] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.772 [2024-02-14 20:19:28.050606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.340 20:19:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.340 20:19:28 -- common/autotest_common.sh@850 -- # return 0 00:18:51.340 20:19:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:51.599 [2024-02-14 20:19:28.887080] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.599 TLSTESTn1 00:18:51.599 20:19:28 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:51.857 Running I/O for 10 seconds... 00:19:01.833 00:19:01.833 Latency(us) 00:19:01.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.833 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:01.833 Verification LBA range: start 0x0 length 0x2000 00:19:01.833 TLSTESTn1 : 10.04 1774.15 6.93 0.00 0.00 72035.61 4181.82 92873.87 00:19:01.833 =================================================================================================================== 00:19:01.833 Total : 1774.15 6.93 0.00 0.00 72035.61 4181.82 92873.87 00:19:01.833 0 00:19:01.833 20:19:39 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:01.833 20:19:39 -- target/tls.sh@45 -- # killprocess 1805647 00:19:01.833 20:19:39 -- common/autotest_common.sh@924 -- # '[' -z 1805647 ']' 00:19:01.833 20:19:39 -- common/autotest_common.sh@928 -- # kill -0 1805647 00:19:01.833 20:19:39 -- common/autotest_common.sh@929 -- # uname 00:19:01.833 20:19:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:01.833 20:19:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1805647 00:19:01.833 20:19:39 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:01.833 20:19:39 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:01.833 20:19:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1805647' 00:19:01.833 killing process with pid 1805647 00:19:01.833 20:19:39 -- common/autotest_common.sh@943 -- # kill 1805647 00:19:01.833 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.833 00:19:01.833 Latency(us) 00:19:01.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.834 =================================================================================================================== 00:19:01.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.834 20:19:39 -- common/autotest_common.sh@948 -- # wait 1805647 00:19:02.092 20:19:39 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:02.092 20:19:39 -- common/autotest_common.sh@638 -- # local es=0 00:19:02.092 20:19:39 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:02.092 20:19:39 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:02.092 20:19:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:02.092 20:19:39 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:02.092 20:19:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:02.092 20:19:39 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:02.092 20:19:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:02.092 20:19:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.092 20:19:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.092 20:19:39 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:19:02.092 20:19:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.092 20:19:39 -- target/tls.sh@28 -- # bdevperf_pid=1807496 00:19:02.092 20:19:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.092 20:19:39 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.092 20:19:39 -- target/tls.sh@31 -- # waitforlisten 1807496 /var/tmp/bdevperf.sock 00:19:02.092 20:19:39 -- common/autotest_common.sh@817 -- # '[' -z 1807496 ']' 00:19:02.092 20:19:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.092 20:19:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:02.092 20:19:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.092 20:19:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:02.093 20:19:39 -- common/autotest_common.sh@10 -- # set +x 00:19:02.093 [2024-02-14 20:19:39.438020] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:19:02.093 [2024-02-14 20:19:39.438067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807496 ] 00:19:02.093 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.093 [2024-02-14 20:19:39.492588] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.351 [2024-02-14 20:19:39.564451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.917 20:19:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:02.917 20:19:40 -- common/autotest_common.sh@850 -- # return 0 00:19:02.917 20:19:40 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:03.175 [2024-02-14 20:19:40.374381] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:03.175 [2024-02-14 20:19:40.379126] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:03.175 [2024-02-14 20:19:40.379763] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2b750 (107): Transport endpoint is not connected 00:19:03.175 [2024-02-14 20:19:40.380757] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2b750 (9): Bad file descriptor 00:19:03.175 [2024-02-14 20:19:40.381758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:03.175 [2024-02-14 20:19:40.381768] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:03.175 [2024-02-14 20:19:40.381776] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
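This first negative case hands the initiator the key-2 PSK, so the attach must fail; the NOT wrapper inverts the exit status, making an unexpected success fail the test, and the JSON-RPC error dumped next is therefore the expected outcome. A simplified sketch of the wrapper, omitting the valid_exec_arg argument validation the real helper performs first:

  NOT() {
      if "$@"; then return 1; fi   # the command succeeded, which is the failure case here
      return 0                     # a nonzero exit is exactly what we expected
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 key2.txt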
00:19:03.175 request: 00:19:03.175 { 00:19:03.175 "name": "TLSTEST", 00:19:03.175 "trtype": "tcp", 00:19:03.175 "traddr": "10.0.0.2", 00:19:03.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.176 "adrfam": "ipv4", 00:19:03.176 "trsvcid": "4420", 00:19:03.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.176 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:19:03.176 "method": "bdev_nvme_attach_controller", 00:19:03.176 "req_id": 1 00:19:03.176 } 00:19:03.176 Got JSON-RPC error response 00:19:03.176 response: 00:19:03.176 { 00:19:03.176 "code": -32602, 00:19:03.176 "message": "Invalid parameters" 00:19:03.176 } 00:19:03.176 20:19:40 -- target/tls.sh@36 -- # killprocess 1807496 00:19:03.176 20:19:40 -- common/autotest_common.sh@924 -- # '[' -z 1807496 ']' 00:19:03.176 20:19:40 -- common/autotest_common.sh@928 -- # kill -0 1807496 00:19:03.176 20:19:40 -- common/autotest_common.sh@929 -- # uname 00:19:03.176 20:19:40 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:03.176 20:19:40 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1807496 00:19:03.176 20:19:40 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:03.176 20:19:40 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:03.176 20:19:40 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1807496' 00:19:03.176 killing process with pid 1807496 00:19:03.176 20:19:40 -- common/autotest_common.sh@943 -- # kill 1807496 00:19:03.176 Received shutdown signal, test time was about 10.000000 seconds 00:19:03.176 00:19:03.176 Latency(us) 00:19:03.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.176 =================================================================================================================== 00:19:03.176 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:03.176 20:19:40 -- common/autotest_common.sh@948 -- # wait 1807496 00:19:03.434 20:19:40 -- target/tls.sh@37 -- # return 1 00:19:03.434 20:19:40 -- common/autotest_common.sh@641 -- # es=1 00:19:03.434 20:19:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:03.434 20:19:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:03.434 20:19:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:03.434 20:19:40 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:03.434 20:19:40 -- common/autotest_common.sh@638 -- # local es=0 00:19:03.435 20:19:40 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:03.435 20:19:40 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:03.435 20:19:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:03.435 20:19:40 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:03.435 20:19:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:03.435 20:19:40 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:03.435 20:19:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:03.435 20:19:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:03.435 20:19:40 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:19:03.435 20:19:40 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:03.435 20:19:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.435 20:19:40 -- target/tls.sh@28 -- # bdevperf_pid=1807737 00:19:03.435 20:19:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:03.435 20:19:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:03.435 20:19:40 -- target/tls.sh@31 -- # waitforlisten 1807737 /var/tmp/bdevperf.sock 00:19:03.435 20:19:40 -- common/autotest_common.sh@817 -- # '[' -z 1807737 ']' 00:19:03.435 20:19:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.435 20:19:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:03.435 20:19:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.435 20:19:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:03.435 20:19:40 -- common/autotest_common.sh@10 -- # set +x 00:19:03.435 [2024-02-14 20:19:40.686963] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:03.435 [2024-02-14 20:19:40.687010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807737 ] 00:19:03.435 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.435 [2024-02-14 20:19:40.741814] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.435 [2024-02-14 20:19:40.805531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.372 20:19:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:04.372 20:19:41 -- common/autotest_common.sh@850 -- # return 0 00:19:04.372 20:19:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:04.372 [2024-02-14 20:19:41.634428] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.372 [2024-02-14 20:19:41.641866] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:04.372 [2024-02-14 20:19:41.641887] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:04.372 [2024-02-14 20:19:41.641910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:04.373 [2024-02-14 20:19:41.642028] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2f750 (107): Transport endpoint is not connected 00:19:04.373 [2024-02-14 20:19:41.642931] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1a2f750 (9): Bad file descriptor 00:19:04.373 [2024-02-14 20:19:41.643931] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:04.373 [2024-02-14 20:19:41.643941] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:04.373 [2024-02-14 20:19:41.643950] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:04.373 request: 00:19:04.373 { 00:19:04.373 "name": "TLSTEST", 00:19:04.373 "trtype": "tcp", 00:19:04.373 "traddr": "10.0.0.2", 00:19:04.373 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:04.373 "adrfam": "ipv4", 00:19:04.373 "trsvcid": "4420", 00:19:04.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.373 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:04.373 "method": "bdev_nvme_attach_controller", 00:19:04.373 "req_id": 1 00:19:04.373 } 00:19:04.373 Got JSON-RPC error response 00:19:04.373 response: 00:19:04.373 { 00:19:04.373 "code": -32602, 00:19:04.373 "message": "Invalid parameters" 00:19:04.373 } 00:19:04.373 20:19:41 -- target/tls.sh@36 -- # killprocess 1807737 00:19:04.373 20:19:41 -- common/autotest_common.sh@924 -- # '[' -z 1807737 ']' 00:19:04.373 20:19:41 -- common/autotest_common.sh@928 -- # kill -0 1807737 00:19:04.373 20:19:41 -- common/autotest_common.sh@929 -- # uname 00:19:04.373 20:19:41 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:04.373 20:19:41 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1807737 00:19:04.373 20:19:41 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:04.373 20:19:41 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:04.373 20:19:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1807737' 00:19:04.373 killing process with pid 1807737 00:19:04.373 20:19:41 -- common/autotest_common.sh@943 -- # kill 1807737 00:19:04.373 Received shutdown signal, test time was about 10.000000 seconds 00:19:04.373 00:19:04.373 Latency(us) 00:19:04.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.373 =================================================================================================================== 00:19:04.373 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:04.373 20:19:41 -- common/autotest_common.sh@948 -- # wait 1807737 00:19:04.632 20:19:41 -- target/tls.sh@37 -- # return 1 00:19:04.632 20:19:41 -- common/autotest_common.sh@641 -- # es=1 00:19:04.632 20:19:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:04.632 20:19:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:04.632 20:19:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:04.632 20:19:41 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:04.632 20:19:41 -- common/autotest_common.sh@638 -- # local es=0 00:19:04.632 20:19:41 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:04.632 20:19:41 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:04.632 20:19:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:04.632 20:19:41 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:04.632 20:19:41 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:04.632 20:19:41 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:04.632 20:19:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:04.632 20:19:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:04.632 20:19:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:04.632 20:19:41 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:04.632 20:19:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:04.632 20:19:41 -- target/tls.sh@28 -- # bdevperf_pid=1807972 00:19:04.632 20:19:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.632 20:19:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.632 20:19:41 -- target/tls.sh@31 -- # waitforlisten 1807972 /var/tmp/bdevperf.sock 00:19:04.632 20:19:41 -- common/autotest_common.sh@817 -- # '[' -z 1807972 ']' 00:19:04.632 20:19:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.632 20:19:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:04.632 20:19:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.632 20:19:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:04.632 20:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:04.632 [2024-02-14 20:19:41.954620] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:19:04.632 [2024-02-14 20:19:41.954667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807972 ] 00:19:04.632 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.632 [2024-02-14 20:19:42.008468] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.891 [2024-02-14 20:19:42.073258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.458 20:19:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:05.459 20:19:42 -- common/autotest_common.sh@850 -- # return 0 00:19:05.459 20:19:42 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:05.718 [2024-02-14 20:19:42.902824] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.718 [2024-02-14 20:19:42.907543] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:05.718 [2024-02-14 20:19:42.907563] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:05.718 [2024-02-14 20:19:42.907585] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:05.718 [2024-02-14 20:19:42.908252] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c3750 (107): Transport endpoint is not connected 00:19:05.718 [2024-02-14 20:19:42.909243] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c3750 (9): Bad file descriptor 00:19:05.718 [2024-02-14 20:19:42.910244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:05.718 [2024-02-14 20:19:42.910254] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:05.718 [2024-02-14 20:19:42.910263] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
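The tcp_sock_get_key errors above show what the target indexes keys by: a TLS PSK identity string assembled from the connect parameters, so a subsystem the host was never registered against simply has no entry (the rejected request follows). The identity format visible in the log:

  # NVMe0R01 <hostnqn> <subnqn>, e.g. for this attempt:
  printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2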
00:19:05.718 request: 00:19:05.718 { 00:19:05.718 "name": "TLSTEST", 00:19:05.718 "trtype": "tcp", 00:19:05.718 "traddr": "10.0.0.2", 00:19:05.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.718 "adrfam": "ipv4", 00:19:05.718 "trsvcid": "4420", 00:19:05.718 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:05.718 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:05.718 "method": "bdev_nvme_attach_controller", 00:19:05.718 "req_id": 1 00:19:05.718 } 00:19:05.718 Got JSON-RPC error response 00:19:05.718 response: 00:19:05.718 { 00:19:05.718 "code": -32602, 00:19:05.718 "message": "Invalid parameters" 00:19:05.718 } 00:19:05.718 20:19:42 -- target/tls.sh@36 -- # killprocess 1807972 00:19:05.718 20:19:42 -- common/autotest_common.sh@924 -- # '[' -z 1807972 ']' 00:19:05.718 20:19:42 -- common/autotest_common.sh@928 -- # kill -0 1807972 00:19:05.718 20:19:42 -- common/autotest_common.sh@929 -- # uname 00:19:05.718 20:19:42 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:05.718 20:19:42 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1807972 00:19:05.718 20:19:42 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:05.718 20:19:42 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:05.718 20:19:42 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1807972' 00:19:05.718 killing process with pid 1807972 00:19:05.718 20:19:42 -- common/autotest_common.sh@943 -- # kill 1807972 00:19:05.718 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.718 00:19:05.718 Latency(us) 00:19:05.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.718 =================================================================================================================== 00:19:05.718 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.718 20:19:42 -- common/autotest_common.sh@948 -- # wait 1807972 00:19:05.978 20:19:43 -- target/tls.sh@37 -- # return 1 00:19:05.978 20:19:43 -- common/autotest_common.sh@641 -- # es=1 00:19:05.978 20:19:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:05.978 20:19:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:05.978 20:19:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:05.978 20:19:43 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:05.978 20:19:43 -- common/autotest_common.sh@638 -- # local es=0 00:19:05.978 20:19:43 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:05.978 20:19:43 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:05.978 20:19:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:05.978 20:19:43 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:05.978 20:19:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:05.978 20:19:43 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:05.978 20:19:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:05.978 20:19:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:05.978 20:19:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:05.978 20:19:43 -- target/tls.sh@23 -- # psk= 00:19:05.978 20:19:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.978 20:19:43 -- target/tls.sh@28 
-- # bdevperf_pid=1808209 00:19:05.978 20:19:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.978 20:19:43 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.978 20:19:43 -- target/tls.sh@31 -- # waitforlisten 1808209 /var/tmp/bdevperf.sock 00:19:05.978 20:19:43 -- common/autotest_common.sh@817 -- # '[' -z 1808209 ']' 00:19:05.978 20:19:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.978 20:19:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:05.978 20:19:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.978 20:19:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:05.978 20:19:43 -- common/autotest_common.sh@10 -- # set +x 00:19:05.978 [2024-02-14 20:19:43.210103] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:05.978 [2024-02-14 20:19:43.210148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808209 ] 00:19:05.978 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.978 [2024-02-14 20:19:43.264036] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.978 [2024-02-14 20:19:43.327754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.914 20:19:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:06.914 20:19:44 -- common/autotest_common.sh@850 -- # return 0 00:19:06.914 20:19:44 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:06.914 [2024-02-14 20:19:44.178287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:06.914 [2024-02-14 20:19:44.179707] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2409700 (9): Bad file descriptor 00:19:06.914 [2024-02-14 20:19:44.180707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:06.914 [2024-02-14 20:19:44.180718] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:06.914 [2024-02-14 20:19:44.180727] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:06.914 request: 00:19:06.914 { 00:19:06.914 "name": "TLSTEST", 00:19:06.914 "trtype": "tcp", 00:19:06.914 "traddr": "10.0.0.2", 00:19:06.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.914 "adrfam": "ipv4", 00:19:06.914 "trsvcid": "4420", 00:19:06.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.914 "method": "bdev_nvme_attach_controller", 00:19:06.914 "req_id": 1 00:19:06.914 } 00:19:06.914 Got JSON-RPC error response 00:19:06.914 response: 00:19:06.914 { 00:19:06.914 "code": -32602, 00:19:06.914 "message": "Invalid parameters" 00:19:06.914 } 00:19:06.914 20:19:44 -- target/tls.sh@36 -- # killprocess 1808209 00:19:06.914 20:19:44 -- common/autotest_common.sh@924 -- # '[' -z 1808209 ']' 00:19:06.914 20:19:44 -- common/autotest_common.sh@928 -- # kill -0 1808209 00:19:06.914 20:19:44 -- common/autotest_common.sh@929 -- # uname 00:19:06.914 20:19:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:06.914 20:19:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1808209 00:19:06.914 20:19:44 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:06.914 20:19:44 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:06.914 20:19:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1808209' 00:19:06.914 killing process with pid 1808209 00:19:06.914 20:19:44 -- common/autotest_common.sh@943 -- # kill 1808209 00:19:06.914 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.914 00:19:06.914 Latency(us) 00:19:06.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.914 =================================================================================================================== 00:19:06.914 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:06.914 20:19:44 -- common/autotest_common.sh@948 -- # wait 1808209 00:19:07.173 20:19:44 -- target/tls.sh@37 -- # return 1 00:19:07.173 20:19:44 -- common/autotest_common.sh@641 -- # es=1 00:19:07.173 20:19:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:07.173 20:19:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:07.173 20:19:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:07.173 20:19:44 -- target/tls.sh@167 -- # killprocess 1803276 00:19:07.173 20:19:44 -- common/autotest_common.sh@924 -- # '[' -z 1803276 ']' 00:19:07.173 20:19:44 -- common/autotest_common.sh@928 -- # kill -0 1803276 00:19:07.173 20:19:44 -- common/autotest_common.sh@929 -- # uname 00:19:07.173 20:19:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:07.173 20:19:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1803276 00:19:07.173 20:19:44 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:07.173 20:19:44 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:07.173 20:19:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1803276' 00:19:07.173 killing process with pid 1803276 00:19:07.173 20:19:44 -- common/autotest_common.sh@943 -- # kill 1803276 00:19:07.173 20:19:44 -- common/autotest_common.sh@948 -- # wait 1803276 00:19:07.432 20:19:44 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:19:07.432 20:19:44 -- target/tls.sh@49 -- # local key hash crc 00:19:07.432 20:19:44 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:07.432 20:19:44 -- target/tls.sh@51 -- # hash=02 00:19:07.432 20:19:44 -- target/tls.sh@52 -- # gzip 
-1 -c 00:19:07.432 20:19:44 -- target/tls.sh@52 -- # tail -c8 00:19:07.432 20:19:44 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:19:07.432 20:19:44 -- target/tls.sh@52 -- # head -c 4 00:19:07.432 20:19:44 -- target/tls.sh@52 -- # crc='�e�'\''' 00:19:07.432 20:19:44 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:07.432 20:19:44 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:19:07.432 20:19:44 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:07.432 20:19:44 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:07.432 20:19:44 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:07.432 20:19:44 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:07.432 20:19:44 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:07.432 20:19:44 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:19:07.432 20:19:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:07.432 20:19:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:07.432 20:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.432 20:19:44 -- nvmf/common.sh@469 -- # nvmfpid=1808468 00:19:07.432 20:19:44 -- nvmf/common.sh@470 -- # waitforlisten 1808468 00:19:07.432 20:19:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:07.432 20:19:44 -- common/autotest_common.sh@817 -- # '[' -z 1808468 ']' 00:19:07.432 20:19:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.432 20:19:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:07.432 20:19:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.432 20:19:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:07.432 20:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:07.433 [2024-02-14 20:19:44.761519] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:07.433 [2024-02-14 20:19:44.761565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.433 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.433 [2024-02-14 20:19:44.821989] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.691 [2024-02-14 20:19:44.898840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:07.691 [2024-02-14 20:19:44.898940] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.691 [2024-02-14 20:19:44.898947] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.691 [2024-02-14 20:19:44.898953] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
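The pipeline above is how format_interchange_psk builds the long-form key: the configured key bytes plus their CRC32 are base64-encoded under an NVMeTLSkey-1:<hash>: prefix (hash identifier 02 selects SHA-384 in the PSK interchange format; 01 would be SHA-256). A minimal standalone sketch of the same construction, leaning on the fact that gzip's 8-byte trailer starts with the CRC32 of the uncompressed input:

# Sketch: reproduce key_long exactly as the tls.sh@52-54 pipeline above does.
# The 48 hex digits are used as raw ASCII bytes, not decoded to binary, and
# tail -c8 | head -c4 pulls the little-endian CRC32 out of the gzip trailer.
key=00112233445566778899aabbccddeeff0011223344556677
b64=$({ echo -n "$key"; echo -n "$key" | gzip -1 -c | tail -c8 | head -c4; } | base64)
echo "NVMeTLSkey-1:02:${b64}:"
# prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: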
00:19:07.691 [2024-02-14 20:19:44.898967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.259 20:19:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:08.259 20:19:45 -- common/autotest_common.sh@850 -- # return 0 00:19:08.259 20:19:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:08.259 20:19:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:08.259 20:19:45 -- common/autotest_common.sh@10 -- # set +x 00:19:08.259 20:19:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.259 20:19:45 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:08.259 20:19:45 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:08.259 20:19:45 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:08.519 [2024-02-14 20:19:45.737386] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.519 20:19:45 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:08.519 20:19:45 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:08.844 [2024-02-14 20:19:46.046176] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.844 [2024-02-14 20:19:46.046358] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.844 20:19:46 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:08.844 malloc0 00:19:08.844 20:19:46 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:09.103 20:19:46 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:09.362 20:19:46 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:09.362 20:19:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:09.362 20:19:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:09.362 20:19:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:09.362 20:19:46 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:19:09.362 20:19:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.362 20:19:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:09.362 20:19:46 -- target/tls.sh@28 -- # bdevperf_pid=1808734 00:19:09.362 20:19:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:09.362 20:19:46 -- target/tls.sh@31 -- # waitforlisten 1808734 /var/tmp/bdevperf.sock 00:19:09.362 20:19:46 -- common/autotest_common.sh@817 -- # '[' -z 1808734 
']' 00:19:09.362 20:19:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.362 20:19:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:09.362 20:19:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.362 20:19:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:09.362 20:19:46 -- common/autotest_common.sh@10 -- # set +x 00:19:09.362 [2024-02-14 20:19:46.567775] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:09.362 [2024-02-14 20:19:46.567819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808734 ] 00:19:09.362 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.362 [2024-02-14 20:19:46.621858] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.362 [2024-02-14 20:19:46.696471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.298 20:19:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:10.298 20:19:47 -- common/autotest_common.sh@850 -- # return 0 00:19:10.298 20:19:47 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:10.298 [2024-02-14 20:19:47.534680] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.298 TLSTESTn1 00:19:10.298 20:19:47 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:10.557 Running I/O for 10 seconds... 
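With the xtrace noise stripped away, the passing TLS case now running boils down to this RPC sequence, quoted from the trace above (rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py and key_long.txt the full PSK path; -k on the listener is what enables TLS):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key_long.txt
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key_long.txt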
00:19:20.543 00:19:20.543 Latency(us) 00:19:20.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.543 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:20.543 Verification LBA range: start 0x0 length 0x2000 00:19:20.543 TLSTESTn1 : 10.03 1765.71 6.90 0.00 0.00 72400.12 3947.76 94371.84 00:19:20.543 =================================================================================================================== 00:19:20.543 Total : 1765.71 6.90 0.00 0.00 72400.12 3947.76 94371.84 00:19:20.543 0 00:19:20.543 20:19:57 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:20.543 20:19:57 -- target/tls.sh@45 -- # killprocess 1808734 00:19:20.543 20:19:57 -- common/autotest_common.sh@924 -- # '[' -z 1808734 ']' 00:19:20.543 20:19:57 -- common/autotest_common.sh@928 -- # kill -0 1808734 00:19:20.543 20:19:57 -- common/autotest_common.sh@929 -- # uname 00:19:20.543 20:19:57 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:20.543 20:19:57 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1808734 00:19:20.543 20:19:57 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:20.543 20:19:57 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:20.543 20:19:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1808734' 00:19:20.543 killing process with pid 1808734 00:19:20.543 20:19:57 -- common/autotest_common.sh@943 -- # kill 1808734 00:19:20.543 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.543 00:19:20.543 Latency(us) 00:19:20.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.543 =================================================================================================================== 00:19:20.543 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.543 20:19:57 -- common/autotest_common.sh@948 -- # wait 1808734 00:19:20.801 20:19:58 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:20.801 20:19:58 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:20.801 20:19:58 -- common/autotest_common.sh@638 -- # local es=0 00:19:20.801 20:19:58 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:20.801 20:19:58 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:20.801 20:19:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:20.801 20:19:58 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:20.801 20:19:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:20.801 20:19:58 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:20.801 20:19:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.801 20:19:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.801 20:19:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.801 20:19:58 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:19:20.801 20:19:58 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.801 20:19:58 -- target/tls.sh@28 -- # bdevperf_pid=1810736 00:19:20.801 20:19:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.801 20:19:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.801 20:19:58 -- target/tls.sh@31 -- # waitforlisten 1810736 /var/tmp/bdevperf.sock 00:19:20.802 20:19:58 -- common/autotest_common.sh@817 -- # '[' -z 1810736 ']' 00:19:20.802 20:19:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.802 20:19:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:20.802 20:19:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.802 20:19:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:20.802 20:19:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.802 [2024-02-14 20:19:58.090271] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:20.802 [2024-02-14 20:19:58.090323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810736 ] 00:19:20.802 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.802 [2024-02-14 20:19:58.144326] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.061 [2024-02-14 20:19:58.220029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.628 20:19:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:21.628 20:19:58 -- common/autotest_common.sh@850 -- # return 0 00:19:21.629 20:19:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:21.887 [2024-02-14 20:19:59.045865] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.887 [2024-02-14 20:19:59.045912] bdev_nvme_rpc.c: 337:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:21.887 request: 00:19:21.887 { 00:19:21.887 "name": "TLSTEST", 00:19:21.887 "trtype": "tcp", 00:19:21.887 "traddr": "10.0.0.2", 00:19:21.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.887 "adrfam": "ipv4", 00:19:21.887 "trsvcid": "4420", 00:19:21.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.887 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:21.887 "method": "bdev_nvme_attach_controller", 00:19:21.887 "req_id": 1 00:19:21.887 } 00:19:21.887 Got JSON-RPC error response 00:19:21.887 response: 00:19:21.887 { 00:19:21.887 "code": -22, 00:19:21.887 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:21.887 } 00:19:21.887 20:19:59 -- target/tls.sh@36 -- # killprocess 1810736 00:19:21.887 20:19:59 -- common/autotest_common.sh@924 -- # '[' -z 1810736 ']' 00:19:21.887 20:19:59 -- 
common/autotest_common.sh@928 -- # kill -0 1810736 00:19:21.887 20:19:59 -- common/autotest_common.sh@929 -- # uname 00:19:21.887 20:19:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:21.887 20:19:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1810736 00:19:21.887 20:19:59 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:21.887 20:19:59 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:21.887 20:19:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1810736' 00:19:21.887 killing process with pid 1810736 00:19:21.887 20:19:59 -- common/autotest_common.sh@943 -- # kill 1810736 00:19:21.887 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.887 00:19:21.887 Latency(us) 00:19:21.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.887 =================================================================================================================== 00:19:21.887 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:21.887 20:19:59 -- common/autotest_common.sh@948 -- # wait 1810736 00:19:21.887 20:19:59 -- target/tls.sh@37 -- # return 1 00:19:21.887 20:19:59 -- common/autotest_common.sh@641 -- # es=1 00:19:21.887 20:19:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:21.887 20:19:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:21.887 20:19:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:21.887 20:19:59 -- target/tls.sh@183 -- # killprocess 1808468 00:19:21.887 20:19:59 -- common/autotest_common.sh@924 -- # '[' -z 1808468 ']' 00:19:21.887 20:19:59 -- common/autotest_common.sh@928 -- # kill -0 1808468 00:19:21.887 20:19:59 -- common/autotest_common.sh@929 -- # uname 00:19:21.887 20:19:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:21.887 20:19:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1808468 00:19:22.146 20:19:59 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:22.146 20:19:59 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:22.146 20:19:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1808468' 00:19:22.146 killing process with pid 1808468 00:19:22.146 20:19:59 -- common/autotest_common.sh@943 -- # kill 1808468 00:19:22.146 20:19:59 -- common/autotest_common.sh@948 -- # wait 1808468 00:19:22.146 20:19:59 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:22.146 20:19:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:22.146 20:19:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:22.146 20:19:59 -- common/autotest_common.sh@10 -- # set +x 00:19:22.146 20:19:59 -- nvmf/common.sh@469 -- # nvmfpid=1811035 00:19:22.146 20:19:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:22.146 20:19:59 -- nvmf/common.sh@470 -- # waitforlisten 1811035 00:19:22.146 20:19:59 -- common/autotest_common.sh@817 -- # '[' -z 1811035 ']' 00:19:22.146 20:19:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.146 20:19:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.146 20:19:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
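The exit-status bookkeeping that repeats after each of these expected failures reads as follows: run_bdevperf returns 1 (target/tls.sh@37), the valid_exec_arg/es arithmetic records that status, and the NOT wrapper from autotest_common.sh inverts it so the deliberate failure scores as a pass. A rough illustrative equivalent of the wrapper (not SPDK's exact helper):

# Illustrative only: invert the wrapped command's exit status, so an
# attach that is supposed to fail counts as a passing test step.
NOT() { if "$@"; then return 1; else return 0; fi; }
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''  # passes because the attach fails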
00:19:22.146 20:19:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.146 20:19:59 -- common/autotest_common.sh@10 -- # set +x 00:19:22.404 [2024-02-14 20:19:59.599549] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:22.404 [2024-02-14 20:19:59.599592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.404 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.404 [2024-02-14 20:19:59.662387] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.404 [2024-02-14 20:19:59.736835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:22.404 [2024-02-14 20:19:59.736937] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.404 [2024-02-14 20:19:59.736945] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.404 [2024-02-14 20:19:59.736951] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.404 [2024-02-14 20:19:59.736968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.972 20:20:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.972 20:20:00 -- common/autotest_common.sh@850 -- # return 0 00:19:22.972 20:20:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:22.972 20:20:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:22.972 20:20:00 -- common/autotest_common.sh@10 -- # set +x 00:19:23.231 20:20:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.231 20:20:00 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:23.231 20:20:00 -- common/autotest_common.sh@638 -- # local es=0 00:19:23.231 20:20:00 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:23.231 20:20:00 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:19:23.231 20:20:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:23.231 20:20:00 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:19:23.231 20:20:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:23.231 20:20:00 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:23.231 20:20:00 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:23.231 20:20:00 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:23.231 [2024-02-14 20:20:00.573918] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.231 20:20:00 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:23.490 20:20:00 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:23.748 [2024-02-14 20:20:00.930805] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.749 [2024-02-14 20:20:00.930992] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.749 20:20:00 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:23.749 malloc0 00:19:23.749 20:20:01 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:24.008 20:20:01 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:24.008 [2024-02-14 20:20:01.408145] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:24.008 [2024-02-14 20:20:01.408171] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:24.008 [2024-02-14 20:20:01.408185] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:19:24.008 request: 00:19:24.008 { 00:19:24.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.008 "host": "nqn.2016-06.io.spdk:host1", 00:19:24.008 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:24.008 "method": "nvmf_subsystem_add_host", 00:19:24.008 "req_id": 1 00:19:24.008 } 00:19:24.008 Got JSON-RPC error response 00:19:24.008 response: 00:19:24.008 { 00:19:24.008 "code": -32603, 00:19:24.008 "message": "Internal error" 00:19:24.008 } 00:19:24.008 20:20:01 -- common/autotest_common.sh@641 -- # es=1 00:19:24.008 20:20:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:24.008 20:20:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:24.008 20:20:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:24.008 20:20:01 -- target/tls.sh@189 -- # killprocess 1811035 00:19:24.008 20:20:01 -- common/autotest_common.sh@924 -- # '[' -z 1811035 ']' 00:19:24.008 20:20:01 -- common/autotest_common.sh@928 -- # kill -0 1811035 00:19:24.267 20:20:01 -- common/autotest_common.sh@929 -- # uname 00:19:24.267 20:20:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:24.267 20:20:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1811035 00:19:24.267 20:20:01 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:24.267 20:20:01 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:24.267 20:20:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1811035' 00:19:24.267 killing process with pid 1811035 00:19:24.267 20:20:01 -- common/autotest_common.sh@943 -- # kill 1811035 00:19:24.267 20:20:01 -- common/autotest_common.sh@948 -- # wait 1811035 00:19:24.267 20:20:01 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:24.267 20:20:01 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:19:24.268 20:20:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:24.268 20:20:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:24.268 20:20:01 -- common/autotest_common.sh@10 -- # set +x 00:19:24.528 20:20:01 -- nvmf/common.sh@469 -- # nvmfpid=1811308 00:19:24.528 20:20:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:19:24.528 20:20:01 -- nvmf/common.sh@470 -- # waitforlisten 1811308 00:19:24.528 20:20:01 -- common/autotest_common.sh@817 -- # '[' -z 1811308 ']' 00:19:24.528 20:20:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.528 20:20:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:24.528 20:20:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.528 20:20:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:24.528 20:20:01 -- common/autotest_common.sh@10 -- # set +x 00:19:24.528 [2024-02-14 20:20:01.731208] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:24.528 [2024-02-14 20:20:01.731254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.528 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.528 [2024-02-14 20:20:01.792479] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.528 [2024-02-14 20:20:01.867264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:24.528 [2024-02-14 20:20:01.867366] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.528 [2024-02-14 20:20:01.867374] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.528 [2024-02-14 20:20:01.867380] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
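Both negative results above trace back to the same gate: with the key file at mode 0666, the initiator-side load fails with code -22 (bdev_nvme_rpc.c tcp_load_psk) and the target-side nvmf_subsystem_add_host fails with code -32603 (tcp.c tcp_load_psk), each logging 'Incorrect permissions for PSK file'; the chmod 0600 at tls.sh@190 clears the condition for the runs that follow. A rough sketch of the behaviour the log shows (an illustration, not SPDK's actual C check):

# Illustration: refuse a PSK file if any group/other permission bit is set.
psk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
if [ -n "$(find "$psk" -perm /077)" ]; then
    echo "Incorrect permissions for PSK file" >&2
fi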
00:19:24.528 [2024-02-14 20:20:01.867394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.467 20:20:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:25.467 20:20:02 -- common/autotest_common.sh@850 -- # return 0 00:19:25.467 20:20:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:25.467 20:20:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:25.467 20:20:02 -- common/autotest_common.sh@10 -- # set +x 00:19:25.467 20:20:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.467 20:20:02 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:25.467 20:20:02 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:25.467 20:20:02 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:25.467 [2024-02-14 20:20:02.705913] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.467 20:20:02 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:25.727 20:20:02 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:25.727 [2024-02-14 20:20:03.034755] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:25.727 [2024-02-14 20:20:03.034942] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.727 20:20:03 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:25.986 malloc0 00:19:25.986 20:20:03 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:25.986 20:20:03 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:26.245 20:20:03 -- target/tls.sh@197 -- # bdevperf_pid=1811706 00:19:26.245 20:20:03 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.245 20:20:03 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.245 20:20:03 -- target/tls.sh@200 -- # waitforlisten 1811706 /var/tmp/bdevperf.sock 00:19:26.245 20:20:03 -- common/autotest_common.sh@817 -- # '[' -z 1811706 ']' 00:19:26.245 20:20:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.245 20:20:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:26.245 20:20:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
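For reference, every initiator process in this test is the same bdevperf invocation; reading the flags off the command line above: -m 0x4 pins it to core 2 (matching the "Reactor started on core 2" notices), -z makes it start idle and wait for RPC-driven configuration, -r names the RPC socket, -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w verify a read/write workload with data verification, and -t 10 the run time in seconds:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10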
00:19:26.245 20:20:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:26.245 20:20:03 -- common/autotest_common.sh@10 -- # set +x 00:19:26.245 [2024-02-14 20:20:03.572739] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:26.245 [2024-02-14 20:20:03.572786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1811706 ] 00:19:26.245 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.245 [2024-02-14 20:20:03.630796] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.505 [2024-02-14 20:20:03.701514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.073 20:20:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:27.073 20:20:04 -- common/autotest_common.sh@850 -- # return 0 00:19:27.073 20:20:04 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:27.333 [2024-02-14 20:20:04.499130] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.333 TLSTESTn1 00:19:27.333 20:20:04 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:27.593 20:20:04 -- target/tls.sh@205 -- # tgtconf='{ 00:19:27.593 "subsystems": [ 00:19:27.593 { 00:19:27.593 "subsystem": "iobuf", 00:19:27.593 "config": [ 00:19:27.593 { 00:19:27.593 "method": "iobuf_set_options", 00:19:27.593 "params": { 00:19:27.593 "small_pool_count": 8192, 00:19:27.593 "large_pool_count": 1024, 00:19:27.593 "small_bufsize": 8192, 00:19:27.593 "large_bufsize": 135168 00:19:27.593 } 00:19:27.593 } 00:19:27.593 ] 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "subsystem": "sock", 00:19:27.593 "config": [ 00:19:27.593 { 00:19:27.593 "method": "sock_impl_set_options", 00:19:27.593 "params": { 00:19:27.593 "impl_name": "posix", 00:19:27.593 "recv_buf_size": 2097152, 00:19:27.593 "send_buf_size": 2097152, 00:19:27.593 "enable_recv_pipe": true, 00:19:27.593 "enable_quickack": false, 00:19:27.593 "enable_placement_id": 0, 00:19:27.593 "enable_zerocopy_send_server": true, 00:19:27.593 "enable_zerocopy_send_client": false, 00:19:27.593 "zerocopy_threshold": 0, 00:19:27.593 "tls_version": 0, 00:19:27.593 "enable_ktls": false 00:19:27.593 } 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "method": "sock_impl_set_options", 00:19:27.593 "params": { 00:19:27.593 "impl_name": "ssl", 00:19:27.593 "recv_buf_size": 4096, 00:19:27.593 "send_buf_size": 4096, 00:19:27.593 "enable_recv_pipe": true, 00:19:27.593 "enable_quickack": false, 00:19:27.593 "enable_placement_id": 0, 00:19:27.593 "enable_zerocopy_send_server": true, 00:19:27.593 "enable_zerocopy_send_client": false, 00:19:27.593 "zerocopy_threshold": 0, 00:19:27.593 "tls_version": 0, 00:19:27.593 "enable_ktls": false 00:19:27.593 } 00:19:27.593 } 00:19:27.593 ] 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "subsystem": "vmd", 00:19:27.593 "config": [] 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "subsystem": "accel", 00:19:27.593 "config": [ 00:19:27.593 { 00:19:27.593 "method": "accel_set_options", 00:19:27.593 "params": { 00:19:27.593 "small_cache_size": 128, 
00:19:27.593 "large_cache_size": 16, 00:19:27.593 "task_count": 2048, 00:19:27.593 "sequence_count": 2048, 00:19:27.593 "buf_count": 2048 00:19:27.593 } 00:19:27.593 } 00:19:27.593 ] 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "subsystem": "bdev", 00:19:27.593 "config": [ 00:19:27.593 { 00:19:27.593 "method": "bdev_set_options", 00:19:27.593 "params": { 00:19:27.593 "bdev_io_pool_size": 65535, 00:19:27.593 "bdev_io_cache_size": 256, 00:19:27.593 "bdev_auto_examine": true, 00:19:27.593 "iobuf_small_cache_size": 128, 00:19:27.593 "iobuf_large_cache_size": 16 00:19:27.593 } 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "method": "bdev_raid_set_options", 00:19:27.593 "params": { 00:19:27.593 "process_window_size_kb": 1024 00:19:27.593 } 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "method": "bdev_iscsi_set_options", 00:19:27.593 "params": { 00:19:27.593 "timeout_sec": 30 00:19:27.593 } 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "method": "bdev_nvme_set_options", 00:19:27.593 "params": { 00:19:27.593 "action_on_timeout": "none", 00:19:27.593 "timeout_us": 0, 00:19:27.593 "timeout_admin_us": 0, 00:19:27.593 "keep_alive_timeout_ms": 10000, 00:19:27.593 "arbitration_burst": 0, 00:19:27.593 "low_priority_weight": 0, 00:19:27.593 "medium_priority_weight": 0, 00:19:27.593 "high_priority_weight": 0, 00:19:27.593 "nvme_adminq_poll_period_us": 10000, 00:19:27.593 "nvme_ioq_poll_period_us": 0, 00:19:27.593 "io_queue_requests": 0, 00:19:27.593 "delay_cmd_submit": true, 00:19:27.593 "transport_retry_count": 4, 00:19:27.593 "bdev_retry_count": 3, 00:19:27.593 "transport_ack_timeout": 0, 00:19:27.593 "ctrlr_loss_timeout_sec": 0, 00:19:27.593 "reconnect_delay_sec": 0, 00:19:27.593 "fast_io_fail_timeout_sec": 0, 00:19:27.593 "disable_auto_failback": false, 00:19:27.593 "generate_uuids": false, 00:19:27.593 "transport_tos": 0, 00:19:27.593 "nvme_error_stat": false, 00:19:27.593 "rdma_srq_size": 0, 00:19:27.593 "io_path_stat": false, 00:19:27.593 "allow_accel_sequence": false, 00:19:27.593 "rdma_max_cq_size": 0, 00:19:27.593 "rdma_cm_event_timeout_ms": 0 00:19:27.593 } 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "method": "bdev_nvme_set_hotplug", 00:19:27.593 "params": { 00:19:27.593 "period_us": 100000, 00:19:27.593 "enable": false 00:19:27.593 } 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "method": "bdev_malloc_create", 00:19:27.593 "params": { 00:19:27.593 "name": "malloc0", 00:19:27.593 "num_blocks": 8192, 00:19:27.593 "block_size": 4096, 00:19:27.593 "physical_block_size": 4096, 00:19:27.593 "uuid": "36caa1de-eb69-4d26-96a4-4d17a010da26", 00:19:27.593 "optimal_io_boundary": 0 00:19:27.593 } 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "method": "bdev_wait_for_examine" 00:19:27.593 } 00:19:27.593 ] 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "subsystem": "nbd", 00:19:27.593 "config": [] 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "subsystem": "scheduler", 00:19:27.593 "config": [ 00:19:27.593 { 00:19:27.593 "method": "framework_set_scheduler", 00:19:27.593 "params": { 00:19:27.593 "name": "static" 00:19:27.593 } 00:19:27.593 } 00:19:27.593 ] 00:19:27.593 }, 00:19:27.593 { 00:19:27.593 "subsystem": "nvmf", 00:19:27.593 "config": [ 00:19:27.593 { 00:19:27.593 "method": "nvmf_set_config", 00:19:27.593 "params": { 00:19:27.593 "discovery_filter": "match_any", 00:19:27.593 "admin_cmd_passthru": { 00:19:27.593 "identify_ctrlr": false 00:19:27.593 } 00:19:27.593 } 00:19:27.593 }, 00:19:27.593 { 00:19:27.594 "method": "nvmf_set_max_subsystems", 00:19:27.594 "params": { 00:19:27.594 "max_subsystems": 1024 00:19:27.594 } 
00:19:27.594 }, 00:19:27.594 { 00:19:27.594 "method": "nvmf_set_crdt", 00:19:27.594 "params": { 00:19:27.594 "crdt1": 0, 00:19:27.594 "crdt2": 0, 00:19:27.594 "crdt3": 0 00:19:27.594 } 00:19:27.594 }, 00:19:27.594 { 00:19:27.594 "method": "nvmf_create_transport", 00:19:27.594 "params": { 00:19:27.594 "trtype": "TCP", 00:19:27.594 "max_queue_depth": 128, 00:19:27.594 "max_io_qpairs_per_ctrlr": 127, 00:19:27.594 "in_capsule_data_size": 4096, 00:19:27.594 "max_io_size": 131072, 00:19:27.594 "io_unit_size": 131072, 00:19:27.594 "max_aq_depth": 128, 00:19:27.594 "num_shared_buffers": 511, 00:19:27.594 "buf_cache_size": 4294967295, 00:19:27.594 "dif_insert_or_strip": false, 00:19:27.594 "zcopy": false, 00:19:27.594 "c2h_success": false, 00:19:27.594 "sock_priority": 0, 00:19:27.594 "abort_timeout_sec": 1 00:19:27.594 } 00:19:27.594 }, 00:19:27.594 { 00:19:27.594 "method": "nvmf_create_subsystem", 00:19:27.594 "params": { 00:19:27.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.594 "allow_any_host": false, 00:19:27.594 "serial_number": "SPDK00000000000001", 00:19:27.594 "model_number": "SPDK bdev Controller", 00:19:27.594 "max_namespaces": 10, 00:19:27.594 "min_cntlid": 1, 00:19:27.594 "max_cntlid": 65519, 00:19:27.594 "ana_reporting": false 00:19:27.594 } 00:19:27.594 }, 00:19:27.594 { 00:19:27.594 "method": "nvmf_subsystem_add_host", 00:19:27.594 "params": { 00:19:27.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.594 "host": "nqn.2016-06.io.spdk:host1", 00:19:27.594 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:27.594 } 00:19:27.594 }, 00:19:27.594 { 00:19:27.594 "method": "nvmf_subsystem_add_ns", 00:19:27.594 "params": { 00:19:27.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.594 "namespace": { 00:19:27.594 "nsid": 1, 00:19:27.594 "bdev_name": "malloc0", 00:19:27.594 "nguid": "36CAA1DEEB694D2696A44D17A010DA26", 00:19:27.594 "uuid": "36caa1de-eb69-4d26-96a4-4d17a010da26" 00:19:27.594 } 00:19:27.594 } 00:19:27.594 }, 00:19:27.594 { 00:19:27.594 "method": "nvmf_subsystem_add_listener", 00:19:27.594 "params": { 00:19:27.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.594 "listen_address": { 00:19:27.594 "trtype": "TCP", 00:19:27.594 "adrfam": "IPv4", 00:19:27.594 "traddr": "10.0.0.2", 00:19:27.594 "trsvcid": "4420" 00:19:27.594 }, 00:19:27.594 "secure_channel": true 00:19:27.594 } 00:19:27.594 } 00:19:27.594 ] 00:19:27.594 } 00:19:27.594 ] 00:19:27.594 }' 00:19:27.594 20:20:04 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:27.854 20:20:05 -- target/tls.sh@206 -- # bdevperfconf='{ 00:19:27.854 "subsystems": [ 00:19:27.854 { 00:19:27.854 "subsystem": "iobuf", 00:19:27.854 "config": [ 00:19:27.854 { 00:19:27.854 "method": "iobuf_set_options", 00:19:27.854 "params": { 00:19:27.854 "small_pool_count": 8192, 00:19:27.854 "large_pool_count": 1024, 00:19:27.854 "small_bufsize": 8192, 00:19:27.854 "large_bufsize": 135168 00:19:27.854 } 00:19:27.854 } 00:19:27.854 ] 00:19:27.854 }, 00:19:27.854 { 00:19:27.854 "subsystem": "sock", 00:19:27.854 "config": [ 00:19:27.854 { 00:19:27.854 "method": "sock_impl_set_options", 00:19:27.854 "params": { 00:19:27.854 "impl_name": "posix", 00:19:27.854 "recv_buf_size": 2097152, 00:19:27.854 "send_buf_size": 2097152, 00:19:27.854 "enable_recv_pipe": true, 00:19:27.854 "enable_quickack": false, 00:19:27.854 "enable_placement_id": 0, 00:19:27.854 "enable_zerocopy_send_server": true, 00:19:27.854 "enable_zerocopy_send_client": 
false, 00:19:27.854 "zerocopy_threshold": 0, 00:19:27.854 "tls_version": 0, 00:19:27.854 "enable_ktls": false 00:19:27.854 } 00:19:27.854 }, 00:19:27.854 { 00:19:27.854 "method": "sock_impl_set_options", 00:19:27.854 "params": { 00:19:27.854 "impl_name": "ssl", 00:19:27.854 "recv_buf_size": 4096, 00:19:27.854 "send_buf_size": 4096, 00:19:27.854 "enable_recv_pipe": true, 00:19:27.854 "enable_quickack": false, 00:19:27.854 "enable_placement_id": 0, 00:19:27.854 "enable_zerocopy_send_server": true, 00:19:27.854 "enable_zerocopy_send_client": false, 00:19:27.854 "zerocopy_threshold": 0, 00:19:27.854 "tls_version": 0, 00:19:27.854 "enable_ktls": false 00:19:27.854 } 00:19:27.854 } 00:19:27.854 ] 00:19:27.854 }, 00:19:27.854 { 00:19:27.854 "subsystem": "vmd", 00:19:27.854 "config": [] 00:19:27.854 }, 00:19:27.854 { 00:19:27.854 "subsystem": "accel", 00:19:27.854 "config": [ 00:19:27.854 { 00:19:27.854 "method": "accel_set_options", 00:19:27.854 "params": { 00:19:27.854 "small_cache_size": 128, 00:19:27.854 "large_cache_size": 16, 00:19:27.854 "task_count": 2048, 00:19:27.854 "sequence_count": 2048, 00:19:27.854 "buf_count": 2048 00:19:27.854 } 00:19:27.854 } 00:19:27.854 ] 00:19:27.854 }, 00:19:27.854 { 00:19:27.854 "subsystem": "bdev", 00:19:27.854 "config": [ 00:19:27.854 { 00:19:27.854 "method": "bdev_set_options", 00:19:27.854 "params": { 00:19:27.854 "bdev_io_pool_size": 65535, 00:19:27.854 "bdev_io_cache_size": 256, 00:19:27.854 "bdev_auto_examine": true, 00:19:27.854 "iobuf_small_cache_size": 128, 00:19:27.854 "iobuf_large_cache_size": 16 00:19:27.854 } 00:19:27.854 }, 00:19:27.854 { 00:19:27.854 "method": "bdev_raid_set_options", 00:19:27.854 "params": { 00:19:27.854 "process_window_size_kb": 1024 00:19:27.855 } 00:19:27.855 }, 00:19:27.855 { 00:19:27.855 "method": "bdev_iscsi_set_options", 00:19:27.855 "params": { 00:19:27.855 "timeout_sec": 30 00:19:27.855 } 00:19:27.855 }, 00:19:27.855 { 00:19:27.855 "method": "bdev_nvme_set_options", 00:19:27.855 "params": { 00:19:27.855 "action_on_timeout": "none", 00:19:27.855 "timeout_us": 0, 00:19:27.855 "timeout_admin_us": 0, 00:19:27.855 "keep_alive_timeout_ms": 10000, 00:19:27.855 "arbitration_burst": 0, 00:19:27.855 "low_priority_weight": 0, 00:19:27.855 "medium_priority_weight": 0, 00:19:27.855 "high_priority_weight": 0, 00:19:27.855 "nvme_adminq_poll_period_us": 10000, 00:19:27.855 "nvme_ioq_poll_period_us": 0, 00:19:27.855 "io_queue_requests": 512, 00:19:27.855 "delay_cmd_submit": true, 00:19:27.855 "transport_retry_count": 4, 00:19:27.855 "bdev_retry_count": 3, 00:19:27.855 "transport_ack_timeout": 0, 00:19:27.855 "ctrlr_loss_timeout_sec": 0, 00:19:27.855 "reconnect_delay_sec": 0, 00:19:27.855 "fast_io_fail_timeout_sec": 0, 00:19:27.855 "disable_auto_failback": false, 00:19:27.855 "generate_uuids": false, 00:19:27.855 "transport_tos": 0, 00:19:27.855 "nvme_error_stat": false, 00:19:27.855 "rdma_srq_size": 0, 00:19:27.855 "io_path_stat": false, 00:19:27.855 "allow_accel_sequence": false, 00:19:27.855 "rdma_max_cq_size": 0, 00:19:27.855 "rdma_cm_event_timeout_ms": 0 00:19:27.855 } 00:19:27.855 }, 00:19:27.855 { 00:19:27.855 "method": "bdev_nvme_attach_controller", 00:19:27.855 "params": { 00:19:27.855 "name": "TLSTEST", 00:19:27.855 "trtype": "TCP", 00:19:27.855 "adrfam": "IPv4", 00:19:27.855 "traddr": "10.0.0.2", 00:19:27.855 "trsvcid": "4420", 00:19:27.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.855 "prchk_reftag": false, 00:19:27.855 "prchk_guard": false, 00:19:27.855 "ctrlr_loss_timeout_sec": 0, 00:19:27.855 
"reconnect_delay_sec": 0, 00:19:27.855 "fast_io_fail_timeout_sec": 0, 00:19:27.855 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:27.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.855 "hdgst": false, 00:19:27.855 "ddgst": false 00:19:27.855 } 00:19:27.855 }, 00:19:27.855 { 00:19:27.855 "method": "bdev_nvme_set_hotplug", 00:19:27.855 "params": { 00:19:27.855 "period_us": 100000, 00:19:27.855 "enable": false 00:19:27.855 } 00:19:27.855 }, 00:19:27.855 { 00:19:27.855 "method": "bdev_wait_for_examine" 00:19:27.855 } 00:19:27.855 ] 00:19:27.855 }, 00:19:27.855 { 00:19:27.855 "subsystem": "nbd", 00:19:27.855 "config": [] 00:19:27.855 } 00:19:27.855 ] 00:19:27.855 }' 00:19:27.855 20:20:05 -- target/tls.sh@208 -- # killprocess 1811706 00:19:27.855 20:20:05 -- common/autotest_common.sh@924 -- # '[' -z 1811706 ']' 00:19:27.855 20:20:05 -- common/autotest_common.sh@928 -- # kill -0 1811706 00:19:27.855 20:20:05 -- common/autotest_common.sh@929 -- # uname 00:19:27.855 20:20:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:27.855 20:20:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1811706 00:19:27.855 20:20:05 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:27.855 20:20:05 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:27.855 20:20:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1811706' 00:19:27.855 killing process with pid 1811706 00:19:27.855 20:20:05 -- common/autotest_common.sh@943 -- # kill 1811706 00:19:27.855 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.855 00:19:27.855 Latency(us) 00:19:27.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.855 =================================================================================================================== 00:19:27.855 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.855 20:20:05 -- common/autotest_common.sh@948 -- # wait 1811706 00:19:28.115 20:20:05 -- target/tls.sh@209 -- # killprocess 1811308 00:19:28.115 20:20:05 -- common/autotest_common.sh@924 -- # '[' -z 1811308 ']' 00:19:28.115 20:20:05 -- common/autotest_common.sh@928 -- # kill -0 1811308 00:19:28.115 20:20:05 -- common/autotest_common.sh@929 -- # uname 00:19:28.115 20:20:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:28.115 20:20:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1811308 00:19:28.115 20:20:05 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:28.115 20:20:05 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:28.115 20:20:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1811308' 00:19:28.115 killing process with pid 1811308 00:19:28.115 20:20:05 -- common/autotest_common.sh@943 -- # kill 1811308 00:19:28.115 20:20:05 -- common/autotest_common.sh@948 -- # wait 1811308 00:19:28.375 20:20:05 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:28.375 20:20:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:28.375 20:20:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:28.375 20:20:05 -- target/tls.sh@212 -- # echo '{ 00:19:28.375 "subsystems": [ 00:19:28.375 { 00:19:28.375 "subsystem": "iobuf", 00:19:28.375 "config": [ 00:19:28.375 { 00:19:28.375 "method": "iobuf_set_options", 00:19:28.375 "params": { 00:19:28.375 "small_pool_count": 8192, 00:19:28.375 "large_pool_count": 1024, 00:19:28.375 
"small_bufsize": 8192, 00:19:28.375 "large_bufsize": 135168 00:19:28.375 } 00:19:28.375 } 00:19:28.375 ] 00:19:28.375 }, 00:19:28.375 { 00:19:28.375 "subsystem": "sock", 00:19:28.375 "config": [ 00:19:28.375 { 00:19:28.375 "method": "sock_impl_set_options", 00:19:28.375 "params": { 00:19:28.375 "impl_name": "posix", 00:19:28.375 "recv_buf_size": 2097152, 00:19:28.375 "send_buf_size": 2097152, 00:19:28.375 "enable_recv_pipe": true, 00:19:28.375 "enable_quickack": false, 00:19:28.375 "enable_placement_id": 0, 00:19:28.375 "enable_zerocopy_send_server": true, 00:19:28.375 "enable_zerocopy_send_client": false, 00:19:28.375 "zerocopy_threshold": 0, 00:19:28.375 "tls_version": 0, 00:19:28.375 "enable_ktls": false 00:19:28.375 } 00:19:28.375 }, 00:19:28.375 { 00:19:28.375 "method": "sock_impl_set_options", 00:19:28.375 "params": { 00:19:28.375 "impl_name": "ssl", 00:19:28.375 "recv_buf_size": 4096, 00:19:28.375 "send_buf_size": 4096, 00:19:28.375 "enable_recv_pipe": true, 00:19:28.375 "enable_quickack": false, 00:19:28.375 "enable_placement_id": 0, 00:19:28.375 "enable_zerocopy_send_server": true, 00:19:28.375 "enable_zerocopy_send_client": false, 00:19:28.375 "zerocopy_threshold": 0, 00:19:28.375 "tls_version": 0, 00:19:28.375 "enable_ktls": false 00:19:28.375 } 00:19:28.375 } 00:19:28.375 ] 00:19:28.375 }, 00:19:28.375 { 00:19:28.375 "subsystem": "vmd", 00:19:28.375 "config": [] 00:19:28.375 }, 00:19:28.375 { 00:19:28.375 "subsystem": "accel", 00:19:28.375 "config": [ 00:19:28.375 { 00:19:28.375 "method": "accel_set_options", 00:19:28.375 "params": { 00:19:28.375 "small_cache_size": 128, 00:19:28.375 "large_cache_size": 16, 00:19:28.375 "task_count": 2048, 00:19:28.375 "sequence_count": 2048, 00:19:28.375 "buf_count": 2048 00:19:28.375 } 00:19:28.375 } 00:19:28.375 ] 00:19:28.375 }, 00:19:28.375 { 00:19:28.375 "subsystem": "bdev", 00:19:28.376 "config": [ 00:19:28.376 { 00:19:28.376 "method": "bdev_set_options", 00:19:28.376 "params": { 00:19:28.376 "bdev_io_pool_size": 65535, 00:19:28.376 "bdev_io_cache_size": 256, 00:19:28.376 "bdev_auto_examine": true, 00:19:28.376 "iobuf_small_cache_size": 128, 00:19:28.376 "iobuf_large_cache_size": 16 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "bdev_raid_set_options", 00:19:28.376 "params": { 00:19:28.376 "process_window_size_kb": 1024 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "bdev_iscsi_set_options", 00:19:28.376 "params": { 00:19:28.376 "timeout_sec": 30 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "bdev_nvme_set_options", 00:19:28.376 "params": { 00:19:28.376 "action_on_timeout": "none", 00:19:28.376 "timeout_us": 0, 00:19:28.376 "timeout_admin_us": 0, 00:19:28.376 "keep_alive_timeout_ms": 10000, 00:19:28.376 "arbitration_burst": 0, 00:19:28.376 "low_priority_weight": 0, 00:19:28.376 "medium_priority_weight": 0, 00:19:28.376 "high_priority_weight": 0, 00:19:28.376 "nvme_adminq_poll_period_us": 10000, 00:19:28.376 "nvme_ioq_poll_period_us": 0, 00:19:28.376 "io_queue_requests": 0, 00:19:28.376 "delay_cmd_submit": true, 00:19:28.376 "transport_retry_count": 4, 00:19:28.376 "bdev_retry_count": 3, 00:19:28.376 "transport_ack_timeout": 0, 00:19:28.376 "ctrlr_loss_timeout_sec": 0, 00:19:28.376 "reconnect_delay_sec": 0, 00:19:28.376 "fast_io_fail_timeout_sec": 0, 00:19:28.376 "disable_auto_failback": false, 00:19:28.376 "generate_uuids": false, 00:19:28.376 "transport_tos": 0, 00:19:28.376 "nvme_error_stat": false, 00:19:28.376 "rdma_srq_size": 0, 00:19:28.376 "io_path_stat": false, 
00:19:28.376 "allow_accel_sequence": false, 00:19:28.376 "rdma_max_cq_size": 0, 00:19:28.376 "rdma_cm_event_timeout_ms": 0 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "bdev_nvme_set_hotplug", 00:19:28.376 "params": { 00:19:28.376 "period_us": 100000, 00:19:28.376 "enable": false 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "bdev_malloc_create", 00:19:28.376 "params": { 00:19:28.376 "name": "malloc0", 00:19:28.376 "num_blocks": 8192, 00:19:28.376 "block_size": 4096, 00:19:28.376 "physical_block_size": 4096, 00:19:28.376 "uuid": "36caa1de-eb69-4d26-96a4-4d17a010da26", 00:19:28.376 "optimal_io_boundary": 0 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "bdev_wait_for_examine" 00:19:28.376 } 00:19:28.376 ] 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "subsystem": "nbd", 00:19:28.376 "config": [] 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "subsystem": "scheduler", 00:19:28.376 "config": [ 00:19:28.376 { 00:19:28.376 "method": "framework_set_scheduler", 00:19:28.376 "params": { 00:19:28.376 "name": "static" 00:19:28.376 } 00:19:28.376 } 00:19:28.376 ] 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "subsystem": "nvmf", 00:19:28.376 "config": [ 00:19:28.376 { 00:19:28.376 "method": "nvmf_set_config", 00:19:28.376 "params": { 00:19:28.376 "discovery_filter": "match_any", 00:19:28.376 "admin_cmd_passthru": { 00:19:28.376 "identify_ctrlr": false 00:19:28.376 } 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "nvmf_set_max_subsystems", 00:19:28.376 "params": { 00:19:28.376 "max_subsystems": 1024 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "nvmf_set_crdt", 00:19:28.376 "params": { 00:19:28.376 "crdt1": 0, 00:19:28.376 "crdt2": 0, 00:19:28.376 "crdt3": 0 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "nvmf_create_transport", 00:19:28.376 "params": { 00:19:28.376 "trtype": "TCP", 00:19:28.376 "max_queue_depth": 128, 00:19:28.376 "max_io_qpairs_per_ctrlr": 127, 00:19:28.376 "in_capsule_data_size": 4096, 00:19:28.376 "max_io_size": 131072, 00:19:28.376 "io_unit_size": 131072, 00:19:28.376 "max_aq_depth": 128, 00:19:28.376 "num_shared_buffers": 511, 00:19:28.376 "buf_cache_size": 4294967295, 00:19:28.376 "dif_insert_or_strip": false, 00:19:28.376 "zcopy": false, 00:19:28.376 "c2h_success": false, 00:19:28.376 "sock_priority": 0, 00:19:28.376 "abort_timeout_sec": 1 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "nvmf_create_subsystem", 00:19:28.376 "params": { 00:19:28.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.376 "allow_any_host": false, 00:19:28.376 "serial_number": "SPDK00000000000001", 00:19:28.376 "model_number": "SPDK bdev Controller", 00:19:28.376 "max_namespaces": 10, 00:19:28.376 "min_cntlid": 1, 00:19:28.376 "max_cntlid": 65519, 00:19:28.376 "ana_reporting": false 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "nvmf_subsystem_add_host", 00:19:28.376 "params": { 00:19:28.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.376 "host": "nqn.2016-06.io.spdk:host1", 00:19:28.376 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "nvmf_subsystem_add_ns", 00:19:28.376 "params": { 00:19:28.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.376 "namespace": { 00:19:28.376 "nsid": 1, 00:19:28.376 "bdev_name": "malloc0", 00:19:28.376 "nguid": "36CAA1DEEB694D2696A44D17A010DA26", 00:19:28.376 "uuid": 
"36caa1de-eb69-4d26-96a4-4d17a010da26" 00:19:28.376 } 00:19:28.376 } 00:19:28.376 }, 00:19:28.376 { 00:19:28.376 "method": "nvmf_subsystem_add_listener", 00:19:28.376 "params": { 00:19:28.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.376 "listen_address": { 00:19:28.376 "trtype": "TCP", 00:19:28.376 "adrfam": "IPv4", 00:19:28.376 "traddr": "10.0.0.2", 00:19:28.376 "trsvcid": "4420" 00:19:28.376 }, 00:19:28.376 "secure_channel": true 00:19:28.376 } 00:19:28.376 } 00:19:28.376 ] 00:19:28.376 } 00:19:28.376 ] 00:19:28.376 }' 00:19:28.376 20:20:05 -- common/autotest_common.sh@10 -- # set +x 00:19:28.376 20:20:05 -- nvmf/common.sh@469 -- # nvmfpid=1812041 00:19:28.376 20:20:05 -- nvmf/common.sh@470 -- # waitforlisten 1812041 00:19:28.376 20:20:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:28.376 20:20:05 -- common/autotest_common.sh@817 -- # '[' -z 1812041 ']' 00:19:28.376 20:20:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.376 20:20:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:28.376 20:20:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.376 20:20:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:28.376 20:20:05 -- common/autotest_common.sh@10 -- # set +x 00:19:28.376 [2024-02-14 20:20:05.628225] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:28.376 [2024-02-14 20:20:05.628270] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.376 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.376 [2024-02-14 20:20:05.689302] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.376 [2024-02-14 20:20:05.764005] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:28.376 [2024-02-14 20:20:05.764108] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.376 [2024-02-14 20:20:05.764115] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.376 [2024-02-14 20:20:05.764121] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:28.376 [2024-02-14 20:20:05.764141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.376 [2024-02-14 20:20:05.764161] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:19:28.636 [2024-02-14 20:20:05.957205] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.636 [2024-02-14 20:20:05.989239] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.636 [2024-02-14 20:20:05.989420] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.249 20:20:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:29.249 20:20:06 -- common/autotest_common.sh@850 -- # return 0 00:19:29.249 20:20:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:29.249 20:20:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:29.249 20:20:06 -- common/autotest_common.sh@10 -- # set +x 00:19:29.249 20:20:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.249 20:20:06 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:29.249 20:20:06 -- target/tls.sh@216 -- # bdevperf_pid=1812284 00:19:29.249 20:20:06 -- target/tls.sh@217 -- # waitforlisten 1812284 /var/tmp/bdevperf.sock 00:19:29.249 20:20:06 -- common/autotest_common.sh@817 -- # '[' -z 1812284 ']' 00:19:29.249 20:20:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.249 20:20:06 -- target/tls.sh@213 -- # echo '{ 00:19:29.249 "subsystems": [ 00:19:29.249 { 00:19:29.249 "subsystem": "iobuf", 00:19:29.249 "config": [ 00:19:29.249 { 00:19:29.249 "method": "iobuf_set_options", 00:19:29.249 "params": { 00:19:29.249 "small_pool_count": 8192, 00:19:29.249 "large_pool_count": 1024, 00:19:29.249 "small_bufsize": 8192, 00:19:29.249 "large_bufsize": 135168 00:19:29.249 } 00:19:29.249 } 00:19:29.249 ] 00:19:29.249 }, 00:19:29.249 { 00:19:29.249 "subsystem": "sock", 00:19:29.249 "config": [ 00:19:29.249 { 00:19:29.249 "method": "sock_impl_set_options", 00:19:29.249 "params": { 00:19:29.249 "impl_name": "posix", 00:19:29.249 "recv_buf_size": 2097152, 00:19:29.249 "send_buf_size": 2097152, 00:19:29.249 "enable_recv_pipe": true, 00:19:29.249 "enable_quickack": false, 00:19:29.249 "enable_placement_id": 0, 00:19:29.249 "enable_zerocopy_send_server": true, 00:19:29.249 "enable_zerocopy_send_client": false, 00:19:29.249 "zerocopy_threshold": 0, 00:19:29.249 "tls_version": 0, 00:19:29.249 "enable_ktls": false 00:19:29.249 } 00:19:29.249 }, 00:19:29.249 { 00:19:29.249 "method": "sock_impl_set_options", 00:19:29.249 "params": { 00:19:29.249 "impl_name": "ssl", 00:19:29.249 "recv_buf_size": 4096, 00:19:29.249 "send_buf_size": 4096, 00:19:29.249 "enable_recv_pipe": true, 00:19:29.249 "enable_quickack": false, 00:19:29.249 "enable_placement_id": 0, 00:19:29.249 "enable_zerocopy_send_server": true, 00:19:29.249 "enable_zerocopy_send_client": false, 00:19:29.249 "zerocopy_threshold": 0, 00:19:29.249 "tls_version": 0, 00:19:29.249 "enable_ktls": false 00:19:29.249 } 00:19:29.249 } 00:19:29.249 ] 00:19:29.249 }, 00:19:29.249 { 00:19:29.249 "subsystem": "vmd", 00:19:29.249 "config": [] 00:19:29.249 }, 00:19:29.249 { 00:19:29.249 "subsystem": "accel", 00:19:29.249 
"config": [ 00:19:29.249 { 00:19:29.249 "method": "accel_set_options", 00:19:29.249 "params": { 00:19:29.249 "small_cache_size": 128, 00:19:29.249 "large_cache_size": 16, 00:19:29.249 "task_count": 2048, 00:19:29.249 "sequence_count": 2048, 00:19:29.249 "buf_count": 2048 00:19:29.249 } 00:19:29.250 } 00:19:29.250 ] 00:19:29.250 }, 00:19:29.250 { 00:19:29.250 "subsystem": "bdev", 00:19:29.250 "config": [ 00:19:29.250 { 00:19:29.250 "method": "bdev_set_options", 00:19:29.250 "params": { 00:19:29.250 "bdev_io_pool_size": 65535, 00:19:29.250 "bdev_io_cache_size": 256, 00:19:29.250 "bdev_auto_examine": true, 00:19:29.250 "iobuf_small_cache_size": 128, 00:19:29.250 "iobuf_large_cache_size": 16 00:19:29.250 } 00:19:29.250 }, 00:19:29.250 { 00:19:29.250 "method": "bdev_raid_set_options", 00:19:29.250 "params": { 00:19:29.250 "process_window_size_kb": 1024 00:19:29.250 } 00:19:29.250 }, 00:19:29.250 { 00:19:29.250 "method": "bdev_iscsi_set_options", 00:19:29.250 "params": { 00:19:29.250 "timeout_sec": 30 00:19:29.250 } 00:19:29.250 }, 00:19:29.250 { 00:19:29.250 "method": "bdev_nvme_set_options", 00:19:29.250 "params": { 00:19:29.250 "action_on_timeout": "none", 00:19:29.250 "timeout_us": 0, 00:19:29.250 "timeout_admin_us": 0, 00:19:29.250 "keep_alive_timeout_ms": 10000, 00:19:29.250 "arbitration_burst": 0, 00:19:29.250 "low_priority_weight": 0, 00:19:29.250 "medium_priority_weight": 0, 00:19:29.250 "high_priority_weight": 0, 00:19:29.250 "nvme_adminq_poll_period_us": 10000, 00:19:29.250 "nvme_ioq_poll_period_us": 0, 00:19:29.250 "io_queue_requests": 512, 00:19:29.250 "delay_cmd_submit": true, 00:19:29.250 "transport_retry_count": 4, 00:19:29.250 "bdev_retry_count": 3, 00:19:29.250 "transport_ack_timeout": 0, 00:19:29.250 "ctrlr_loss_timeout_sec": 0, 00:19:29.250 "reconnect_delay_sec": 0, 00:19:29.250 "fast_io_fail_timeout_sec": 0, 00:19:29.250 "disable_auto_failback": false, 00:19:29.250 "generate_uuids": false, 00:19:29.250 "transport_tos": 0, 00:19:29.250 "nvme_error_stat": false, 00:19:29.250 "rdma_srq_size": 0, 00:19:29.250 "io_path_stat": false, 00:19:29.250 "allow_accel_sequence": false, 00:19:29.250 "rdma_max_cq_size": 0, 00:19:29.250 "rdma_cm_event_timeout_ms": 0 00:19:29.250 } 00:19:29.250 }, 00:19:29.250 { 00:19:29.250 "method": "bdev_nvme_attach_controller", 00:19:29.250 "params": { 00:19:29.250 "name": "TLSTEST", 00:19:29.250 "trtype": "TCP", 00:19:29.250 "adrfam": "IPv4", 00:19:29.250 "traddr": "10.0.0.2", 00:19:29.250 "trsvcid": "4420", 00:19:29.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.250 "prchk_reftag": false, 00:19:29.250 "prchk_guard": false, 00:19:29.250 "ctrlr_loss_timeout_sec": 0, 00:19:29.250 "reconnect_delay_sec": 0, 00:19:29.250 "fast_io_fail_timeout_sec": 0, 00:19:29.250 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:29.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.250 "hdgst": false, 00:19:29.250 "ddgst": false 00:19:29.250 } 00:19:29.250 }, 00:19:29.250 { 00:19:29.250 "method": "bdev_nvme_set_hotplug", 00:19:29.250 "params": { 00:19:29.250 "period_us": 100000, 00:19:29.250 "enable": false 00:19:29.250 } 00:19:29.250 }, 00:19:29.250 { 00:19:29.250 "method": "bdev_wait_for_examine" 00:19:29.250 } 00:19:29.250 ] 00:19:29.250 }, 00:19:29.250 { 00:19:29.250 "subsystem": "nbd", 00:19:29.250 "config": [] 00:19:29.250 } 00:19:29.250 ] 00:19:29.250 }' 00:19:29.250 20:20:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:29.250 20:20:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.250 20:20:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:29.250 20:20:06 -- common/autotest_common.sh@10 -- # set +x 00:19:29.250 [2024-02-14 20:20:06.473895] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:29.250 [2024-02-14 20:20:06.473941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812284 ] 00:19:29.250 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.250 [2024-02-14 20:20:06.527656] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.250 [2024-02-14 20:20:06.601838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.250 [2024-02-14 20:20:06.601893] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:19:29.510 [2024-02-14 20:20:06.733470] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.079 20:20:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:30.079 20:20:07 -- common/autotest_common.sh@850 -- # return 0 00:19:30.079 20:20:07 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:30.079 Running I/O for 10 seconds... 00:19:40.060 00:19:40.060 Latency(us) 00:19:40.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.060 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.060 Verification LBA range: start 0x0 length 0x2000 00:19:40.060 TLSTESTn1 : 10.04 1766.67 6.90 0.00 0.00 72353.18 4306.65 99864.38 00:19:40.060 =================================================================================================================== 00:19:40.060 Total : 1766.67 6.90 0.00 0.00 72353.18 4306.65 99864.38 00:19:40.060 0 00:19:40.060 20:20:17 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.060 20:20:17 -- target/tls.sh@223 -- # killprocess 1812284 00:19:40.060 20:20:17 -- common/autotest_common.sh@924 -- # '[' -z 1812284 ']' 00:19:40.060 20:20:17 -- common/autotest_common.sh@928 -- # kill -0 1812284 00:19:40.060 20:20:17 -- common/autotest_common.sh@929 -- # uname 00:19:40.060 20:20:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:40.060 20:20:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1812284 00:19:40.318 20:20:17 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:40.318 20:20:17 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:40.318 20:20:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1812284' 00:19:40.318 killing process with pid 1812284 00:19:40.318 20:20:17 -- common/autotest_common.sh@943 -- # kill 1812284 00:19:40.318 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.318 00:19:40.318 Latency(us) 00:19:40.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.318 
=================================================================================================================== 00:19:40.318 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.318 [2024-02-14 20:20:17.501891] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:19:40.318 20:20:17 -- common/autotest_common.sh@948 -- # wait 1812284 00:19:40.318 20:20:17 -- target/tls.sh@224 -- # killprocess 1812041 00:19:40.318 20:20:17 -- common/autotest_common.sh@924 -- # '[' -z 1812041 ']' 00:19:40.318 20:20:17 -- common/autotest_common.sh@928 -- # kill -0 1812041 00:19:40.318 20:20:17 -- common/autotest_common.sh@929 -- # uname 00:19:40.318 20:20:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:40.318 20:20:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1812041 00:19:40.577 20:20:17 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:40.577 20:20:17 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:40.577 20:20:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1812041' 00:19:40.577 killing process with pid 1812041 00:19:40.577 20:20:17 -- common/autotest_common.sh@943 -- # kill 1812041 00:19:40.577 [2024-02-14 20:20:17.743560] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:19:40.577 20:20:17 -- common/autotest_common.sh@948 -- # wait 1812041 00:19:40.577 20:20:17 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:19:40.577 20:20:17 -- target/tls.sh@227 -- # cleanup 00:19:40.577 20:20:17 -- target/tls.sh@15 -- # process_shm --id 0 00:19:40.577 20:20:17 -- common/autotest_common.sh@794 -- # type=--id 00:19:40.577 20:20:17 -- common/autotest_common.sh@795 -- # id=0 00:19:40.577 20:20:17 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:40.577 20:20:17 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:40.577 20:20:17 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:40.577 20:20:17 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:40.577 20:20:17 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:40.577 20:20:17 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:40.577 nvmf_trace.0 00:19:40.836 20:20:18 -- common/autotest_common.sh@809 -- # return 0 00:19:40.836 20:20:18 -- target/tls.sh@16 -- # killprocess 1812284 00:19:40.836 20:20:18 -- common/autotest_common.sh@924 -- # '[' -z 1812284 ']' 00:19:40.836 20:20:18 -- common/autotest_common.sh@928 -- # kill -0 1812284 00:19:40.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (1812284) - No such process 00:19:40.836 20:20:18 -- common/autotest_common.sh@951 -- # echo 'Process with pid 1812284 is not found' 00:19:40.836 Process with pid 1812284 is not found 00:19:40.836 20:20:18 -- target/tls.sh@17 -- # nvmftestfini 00:19:40.836 20:20:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:40.836 20:20:18 -- nvmf/common.sh@116 -- # sync 00:19:40.836 20:20:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:40.836 20:20:18 -- nvmf/common.sh@119 -- # set +e 00:19:40.836 20:20:18 -- nvmf/common.sh@120 -- # for i in {1..20} 
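The nvmf_trace.0 file archived just above exists because both apps ran with -e 0xFFFF, which enables every tracepoint group; cleanup tars the shared-memory file into the job's output directory so the run can still be analyzed after the build node is recycled. Per the target's own startup notice, the same data can also be read live while the app is up:

    spdk_trace -s nvmf -i 0    # snapshot tracepoints of shm id 0, as the notice suggests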
00:19:40.836 20:20:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:40.836 rmmod nvme_tcp 00:19:40.836 rmmod nvme_fabrics 00:19:40.836 rmmod nvme_keyring 00:19:40.836 20:20:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:40.836 20:20:18 -- nvmf/common.sh@123 -- # set -e 00:19:40.836 20:20:18 -- nvmf/common.sh@124 -- # return 0 00:19:40.836 20:20:18 -- nvmf/common.sh@477 -- # '[' -n 1812041 ']' 00:19:40.836 20:20:18 -- nvmf/common.sh@478 -- # killprocess 1812041 00:19:40.836 20:20:18 -- common/autotest_common.sh@924 -- # '[' -z 1812041 ']' 00:19:40.836 20:20:18 -- common/autotest_common.sh@928 -- # kill -0 1812041 00:19:40.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (1812041) - No such process 00:19:40.836 20:20:18 -- common/autotest_common.sh@951 -- # echo 'Process with pid 1812041 is not found' 00:19:40.836 Process with pid 1812041 is not found 00:19:40.836 20:20:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:40.836 20:20:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:40.836 20:20:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:40.836 20:20:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.836 20:20:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:40.836 20:20:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.836 20:20:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.836 20:20:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.743 20:20:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:42.743 20:20:20 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:42.743 00:19:42.743 real 1m12.253s 00:19:42.743 user 1m49.825s 00:19:42.743 sys 0m24.102s 00:19:42.743 20:20:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:42.743 20:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:42.743 ************************************ 00:19:42.743 END TEST nvmf_tls 00:19:42.743 ************************************ 00:19:43.004 20:20:20 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:43.004 20:20:20 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:19:43.004 20:20:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:43.004 20:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.004 ************************************ 00:19:43.004 START TEST nvmf_fips 00:19:43.004 ************************************ 00:19:43.004 20:20:20 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:43.004 * Looking for test storage... 
00:19:43.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:43.004 20:20:20 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.004 20:20:20 -- nvmf/common.sh@7 -- # uname -s 00:19:43.004 20:20:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.004 20:20:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.004 20:20:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.004 20:20:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.004 20:20:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.004 20:20:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.004 20:20:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.004 20:20:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.004 20:20:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.004 20:20:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.004 20:20:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:43.004 20:20:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:43.004 20:20:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.004 20:20:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.004 20:20:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.004 20:20:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.004 20:20:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.004 20:20:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.004 20:20:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.004 20:20:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.004 20:20:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.004 20:20:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.004 20:20:20 -- paths/export.sh@5 -- # export PATH 00:19:43.004 20:20:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.004 20:20:20 -- nvmf/common.sh@46 -- # : 0 00:19:43.004 20:20:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:43.004 20:20:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:43.004 20:20:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:43.004 20:20:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.004 20:20:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.004 20:20:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:43.004 20:20:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:43.004 20:20:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:43.004 20:20:20 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:43.004 20:20:20 -- fips/fips.sh@89 -- # check_openssl_version 00:19:43.004 20:20:20 -- fips/fips.sh@83 -- # local target=3.0.0 00:19:43.004 20:20:20 -- fips/fips.sh@85 -- # openssl version 00:19:43.004 20:20:20 -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:43.004 20:20:20 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:43.004 20:20:20 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:43.004 20:20:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:43.004 20:20:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:43.004 20:20:20 -- scripts/common.sh@335 -- # IFS=.-: 00:19:43.004 20:20:20 -- scripts/common.sh@335 -- # read -ra ver1 00:19:43.004 20:20:20 -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.004 20:20:20 -- scripts/common.sh@336 -- # read -ra ver2 00:19:43.004 20:20:20 -- scripts/common.sh@337 -- # local 'op=>=' 00:19:43.004 20:20:20 -- scripts/common.sh@339 -- # ver1_l=3 00:19:43.004 20:20:20 -- scripts/common.sh@340 -- # ver2_l=3 00:19:43.004 20:20:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:43.004 20:20:20 -- scripts/common.sh@343 -- # case "$op" in 00:19:43.004 20:20:20 -- scripts/common.sh@347 -- # : 1 00:19:43.004 20:20:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:43.004 20:20:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.004 20:20:20 -- scripts/common.sh@364 -- # decimal 3 00:19:43.004 20:20:20 -- scripts/common.sh@352 -- # local d=3 00:19:43.004 20:20:20 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:43.004 20:20:20 -- scripts/common.sh@354 -- # echo 3 00:19:43.004 20:20:20 -- scripts/common.sh@364 -- # ver1[v]=3 00:19:43.004 20:20:20 -- scripts/common.sh@365 -- # decimal 3 00:19:43.004 20:20:20 -- scripts/common.sh@352 -- # local d=3 00:19:43.004 20:20:20 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:43.004 20:20:20 -- scripts/common.sh@354 -- # echo 3 00:19:43.004 20:20:20 -- scripts/common.sh@365 -- # ver2[v]=3 00:19:43.004 20:20:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:43.004 20:20:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:43.004 20:20:20 -- scripts/common.sh@363 -- # (( v++ )) 00:19:43.004 20:20:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:43.004 20:20:20 -- scripts/common.sh@364 -- # decimal 0 00:19:43.004 20:20:20 -- scripts/common.sh@352 -- # local d=0 00:19:43.004 20:20:20 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:43.004 20:20:20 -- scripts/common.sh@354 -- # echo 0 00:19:43.004 20:20:20 -- scripts/common.sh@364 -- # ver1[v]=0 00:19:43.004 20:20:20 -- scripts/common.sh@365 -- # decimal 0 00:19:43.004 20:20:20 -- scripts/common.sh@352 -- # local d=0 00:19:43.004 20:20:20 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:43.004 20:20:20 -- scripts/common.sh@354 -- # echo 0 00:19:43.004 20:20:20 -- scripts/common.sh@365 -- # ver2[v]=0 00:19:43.004 20:20:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:43.004 20:20:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:43.005 20:20:20 -- scripts/common.sh@363 -- # (( v++ )) 00:19:43.005 20:20:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:43.005 20:20:20 -- scripts/common.sh@364 -- # decimal 9 00:19:43.005 20:20:20 -- scripts/common.sh@352 -- # local d=9 00:19:43.005 20:20:20 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:43.005 20:20:20 -- scripts/common.sh@354 -- # echo 9 00:19:43.005 20:20:20 -- scripts/common.sh@364 -- # ver1[v]=9 00:19:43.005 20:20:20 -- scripts/common.sh@365 -- # decimal 0 00:19:43.005 20:20:20 -- scripts/common.sh@352 -- # local d=0 00:19:43.005 20:20:20 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:43.005 20:20:20 -- scripts/common.sh@354 -- # echo 0 00:19:43.005 20:20:20 -- scripts/common.sh@365 -- # ver2[v]=0 00:19:43.005 20:20:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:43.005 20:20:20 -- scripts/common.sh@366 -- # return 0 00:19:43.005 20:20:20 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:43.005 20:20:20 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:43.005 20:20:20 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:43.005 20:20:20 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:43.005 20:20:20 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:43.005 20:20:20 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:43.005 20:20:20 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:43.005 20:20:20 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:19:43.005 20:20:20 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:19:43.005 20:20:20 -- fips/fips.sh@114 -- # build_openssl_config 00:19:43.005 20:20:20 -- fips/fips.sh@37 -- # cat 00:19:43.005 20:20:20 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:43.005 20:20:20 -- fips/fips.sh@58 -- # cat - 00:19:43.005 20:20:20 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:43.005 20:20:20 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:43.005 20:20:20 -- fips/fips.sh@117 -- # mapfile -t providers 00:19:43.005 20:20:20 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:19:43.005 20:20:20 -- fips/fips.sh@117 -- # grep name 00:19:43.005 20:20:20 -- fips/fips.sh@117 -- # openssl list -providers 00:19:43.005 20:20:20 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:43.005 20:20:20 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:43.005 20:20:20 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:43.005 20:20:20 -- fips/fips.sh@128 -- # : 00:19:43.005 20:20:20 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:43.005 20:20:20 -- common/autotest_common.sh@638 -- # local es=0 00:19:43.005 20:20:20 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:43.005 20:20:20 -- common/autotest_common.sh@626 -- # local arg=openssl 00:19:43.005 20:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.005 20:20:20 -- common/autotest_common.sh@630 -- # type -t openssl 00:19:43.005 20:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.005 20:20:20 -- common/autotest_common.sh@632 -- # type -P openssl 00:19:43.005 20:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.005 20:20:20 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:19:43.005 20:20:20 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:19:43.005 20:20:20 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:19:43.264 Error setting digest 00:19:43.264 00A2E10F007F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:43.264 00A2E10F007F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:43.264 20:20:20 -- common/autotest_common.sh@641 -- # es=1 00:19:43.264 20:20:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:43.264 20:20:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:43.264 20:20:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 
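Taken together, the probe above establishes that this host's OpenSSL is FIPS-capable before any NVMe/TLS traffic is attempted: the version (3.0.9) clears the 3.0.0 floor, a fips.so module sits in the directory reported by openssl info -modulesdir, the generated spdk_fips.conf surfaces both a base and a fips provider, and the final NOT check proves a non-approved digest is actually rejected — the "Error setting digest" lines are the expected outcome, so es=1 passes the test. A condensed sketch of the same probe, not the harness code itself:

    # Rough FIPS-readiness probe, mirroring the checks in fips.sh above.
    openssl version | awk '{print $2}'                        # expect >= 3.0.0
    test -f "$(openssl info -modulesdir)/fips.so" || echo "fips module missing" >&2
    OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name
    if echo probe | openssl md5 >/dev/null 2>&1; then         # must FAIL under FIPS
        echo "md5 still usable: FIPS mode not active" >&2
    fi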
00:19:43.264 20:20:20 -- fips/fips.sh@131 -- # nvmftestinit 00:19:43.264 20:20:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:43.264 20:20:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.264 20:20:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:43.264 20:20:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:43.264 20:20:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:43.264 20:20:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.264 20:20:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.264 20:20:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.264 20:20:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:43.264 20:20:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:43.264 20:20:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:43.264 20:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:49.830 20:20:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:49.830 20:20:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:49.830 20:20:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:49.830 20:20:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:49.830 20:20:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:49.830 20:20:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:49.830 20:20:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:49.830 20:20:26 -- nvmf/common.sh@294 -- # net_devs=() 00:19:49.830 20:20:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:49.830 20:20:26 -- nvmf/common.sh@295 -- # e810=() 00:19:49.830 20:20:26 -- nvmf/common.sh@295 -- # local -ga e810 00:19:49.830 20:20:26 -- nvmf/common.sh@296 -- # x722=() 00:19:49.830 20:20:26 -- nvmf/common.sh@296 -- # local -ga x722 00:19:49.830 20:20:26 -- nvmf/common.sh@297 -- # mlx=() 00:19:49.830 20:20:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:49.830 20:20:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.830 20:20:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.831 20:20:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:49.831 20:20:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:49.831 20:20:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:49.831 20:20:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.831 20:20:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:49.831 Found 0000:af:00.0 
(0x8086 - 0x159b) 00:19:49.831 20:20:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.831 20:20:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:49.831 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:49.831 20:20:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:49.831 20:20:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.831 20:20:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.831 20:20:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.831 20:20:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.831 20:20:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:49.831 Found net devices under 0000:af:00.0: cvl_0_0 00:19:49.831 20:20:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.831 20:20:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.831 20:20:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.831 20:20:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.831 20:20:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.831 20:20:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:49.831 Found net devices under 0000:af:00.1: cvl_0_1 00:19:49.831 20:20:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.831 20:20:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:49.831 20:20:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:49.831 20:20:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:49.831 20:20:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.831 20:20:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.831 20:20:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.831 20:20:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:49.831 20:20:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.831 20:20:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.831 20:20:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:49.831 20:20:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.831 20:20:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.831 20:20:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:49.831 20:20:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:49.831 20:20:26 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:19:49.831 20:20:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.831 20:20:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.831 20:20:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.831 20:20:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:49.831 20:20:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.831 20:20:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.831 20:20:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.831 20:20:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:49.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:19:49.831 00:19:49.831 --- 10.0.0.2 ping statistics --- 00:19:49.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.831 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:19:49.831 20:20:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:19:49.831 00:19:49.831 --- 10.0.0.1 ping statistics --- 00:19:49.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.831 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:19:49.831 20:20:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.831 20:20:26 -- nvmf/common.sh@410 -- # return 0 00:19:49.831 20:20:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.831 20:20:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.831 20:20:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:49.831 20:20:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.831 20:20:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:49.831 20:20:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:49.831 20:20:26 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:49.831 20:20:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.831 20:20:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:49.831 20:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.831 20:20:26 -- nvmf/common.sh@469 -- # nvmfpid=1817955 00:19:49.831 20:20:26 -- nvmf/common.sh@470 -- # waitforlisten 1817955 00:19:49.831 20:20:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:49.831 20:20:26 -- common/autotest_common.sh@817 -- # '[' -z 1817955 ']' 00:19:49.831 20:20:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.831 20:20:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:49.831 20:20:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.831 20:20:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:49.831 20:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.831 [2024-02-14 20:20:26.459746] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
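Before this second target comes up, nvmftestinit has rebuilt the two-port topology used throughout these tests: the first E810 port (cvl_0_0) moves into a private namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, iptables opens TCP 4420 on the initiator interface, and one ping in each direction proves the path — presumably so host-local NVMe/TCP traffic crosses between the two physical ports rather than being short-circuited over loopback. Condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1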
00:19:49.831 [2024-02-14 20:20:26.459790] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.831 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.831 [2024-02-14 20:20:26.521805] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.831 [2024-02-14 20:20:26.594684] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.831 [2024-02-14 20:20:26.594792] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.831 [2024-02-14 20:20:26.594800] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.831 [2024-02-14 20:20:26.594806] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.831 [2024-02-14 20:20:26.594820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.831 20:20:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:49.831 20:20:27 -- common/autotest_common.sh@850 -- # return 0 00:19:49.831 20:20:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.831 20:20:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:49.831 20:20:27 -- common/autotest_common.sh@10 -- # set +x 00:19:50.091 20:20:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.091 20:20:27 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:50.091 20:20:27 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:50.091 20:20:27 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:50.091 20:20:27 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:50.091 20:20:27 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:50.091 20:20:27 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:50.091 20:20:27 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:50.091 20:20:27 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.091 [2024-02-14 20:20:27.419664] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.091 [2024-02-14 20:20:27.435668] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:50.091 [2024-02-14 20:20:27.435831] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.091 malloc0 00:19:50.091 20:20:27 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.091 20:20:27 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.091 20:20:27 -- fips/fips.sh@148 -- # bdevperf_pid=1818201 00:19:50.091 20:20:27 -- fips/fips.sh@149 -- # waitforlisten 1818201 /var/tmp/bdevperf.sock 00:19:50.091 20:20:27 -- common/autotest_common.sh@817 -- # '[' -z 1818201 ']' 00:19:50.091 20:20:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.091 20:20:27 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:19:50.091 20:20:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.091 20:20:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:50.091 20:20:27 -- common/autotest_common.sh@10 -- # set +x 00:19:50.350 [2024-02-14 20:20:27.537433] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:50.350 [2024-02-14 20:20:27.537478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818201 ] 00:19:50.350 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.350 [2024-02-14 20:20:27.589825] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.350 [2024-02-14 20:20:27.658089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.918 20:20:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:50.918 20:20:28 -- common/autotest_common.sh@850 -- # return 0 00:19:50.918 20:20:28 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:51.177 [2024-02-14 20:20:28.467806] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.177 TLSTESTn1 00:19:51.177 20:20:28 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.436 Running I/O for 10 seconds... 
00:20:01.414 00:20:01.414 Latency(us) 00:20:01.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.414 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.414 Verification LBA range: start 0x0 length 0x2000 00:20:01.414 TLSTESTn1 : 10.03 1667.84 6.52 0.00 0.00 76644.44 11484.40 98366.42 00:20:01.414 =================================================================================================================== 00:20:01.414 Total : 1667.84 6.52 0.00 0.00 76644.44 11484.40 98366.42 00:20:01.414 0 00:20:01.414 20:20:38 -- fips/fips.sh@1 -- # cleanup 00:20:01.414 20:20:38 -- fips/fips.sh@15 -- # process_shm --id 0 00:20:01.414 20:20:38 -- common/autotest_common.sh@794 -- # type=--id 00:20:01.414 20:20:38 -- common/autotest_common.sh@795 -- # id=0 00:20:01.414 20:20:38 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:01.414 20:20:38 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:01.414 20:20:38 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:01.414 20:20:38 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:01.414 20:20:38 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:01.414 20:20:38 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:01.414 nvmf_trace.0 00:20:01.414 20:20:38 -- common/autotest_common.sh@809 -- # return 0 00:20:01.414 20:20:38 -- fips/fips.sh@16 -- # killprocess 1818201 00:20:01.414 20:20:38 -- common/autotest_common.sh@924 -- # '[' -z 1818201 ']' 00:20:01.414 20:20:38 -- common/autotest_common.sh@928 -- # kill -0 1818201 00:20:01.414 20:20:38 -- common/autotest_common.sh@929 -- # uname 00:20:01.414 20:20:38 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:01.414 20:20:38 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1818201 00:20:01.414 20:20:38 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:20:01.414 20:20:38 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:20:01.414 20:20:38 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1818201' 00:20:01.414 killing process with pid 1818201 00:20:01.414 20:20:38 -- common/autotest_common.sh@943 -- # kill 1818201 00:20:01.414 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.414 00:20:01.414 Latency(us) 00:20:01.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.414 =================================================================================================================== 00:20:01.414 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.414 20:20:38 -- common/autotest_common.sh@948 -- # wait 1818201 00:20:01.673 20:20:39 -- fips/fips.sh@17 -- # nvmftestfini 00:20:01.673 20:20:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:01.673 20:20:39 -- nvmf/common.sh@116 -- # sync 00:20:01.673 20:20:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:01.673 20:20:39 -- nvmf/common.sh@119 -- # set +e 00:20:01.673 20:20:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:01.673 20:20:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:01.673 rmmod nvme_tcp 00:20:01.673 rmmod nvme_fabrics 00:20:01.673 rmmod nvme_keyring 00:20:01.673 20:20:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:01.673 20:20:39 -- nvmf/common.sh@123 -- # set -e 00:20:01.673 20:20:39 -- nvmf/common.sh@124 -- # return 0 
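This run's secure channel hangs off the key created at fips.sh@137: NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: follows the NVMe TLS PSK interchange format — version 1, a hash-indicator field (01 should denote SHA-256, though treat that reading as hedged), then the base64 key body. The file is chmod'd 0600, registered on the target through nvmf_subsystem_add_host's psk parameter, and presented by the initiator in the attach call logged above, repeated here with the workspace prefix trimmed:

    # Initiator-side TLS attach as issued against bdevperf's RPC socket above.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt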
00:20:01.673 20:20:39 -- nvmf/common.sh@477 -- # '[' -n 1817955 ']' 00:20:01.673 20:20:39 -- nvmf/common.sh@478 -- # killprocess 1817955 00:20:01.673 20:20:39 -- common/autotest_common.sh@924 -- # '[' -z 1817955 ']' 00:20:01.673 20:20:39 -- common/autotest_common.sh@928 -- # kill -0 1817955 00:20:01.673 20:20:39 -- common/autotest_common.sh@929 -- # uname 00:20:01.932 20:20:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:01.932 20:20:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1817955 00:20:01.932 20:20:39 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:20:01.932 20:20:39 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:20:01.932 20:20:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1817955' 00:20:01.932 killing process with pid 1817955 00:20:01.932 20:20:39 -- common/autotest_common.sh@943 -- # kill 1817955 00:20:01.932 20:20:39 -- common/autotest_common.sh@948 -- # wait 1817955 00:20:01.932 20:20:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:01.932 20:20:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:01.932 20:20:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:01.932 20:20:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.932 20:20:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:01.932 20:20:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.932 20:20:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.932 20:20:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.466 20:20:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:04.466 20:20:41 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:04.466 00:20:04.466 real 0m21.214s 00:20:04.466 user 0m22.993s 00:20:04.466 sys 0m8.923s 00:20:04.466 20:20:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:04.466 20:20:41 -- common/autotest_common.sh@10 -- # set +x 00:20:04.466 ************************************ 00:20:04.466 END TEST nvmf_fips 00:20:04.466 ************************************ 00:20:04.466 20:20:41 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:04.466 20:20:41 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:04.466 20:20:41 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:20:04.467 20:20:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:04.467 20:20:41 -- common/autotest_common.sh@10 -- # set +x 00:20:04.467 ************************************ 00:20:04.467 START TEST nvmf_fuzz 00:20:04.467 ************************************ 00:20:04.467 20:20:41 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:04.467 * Looking for test storage... 
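Each of these suites runs under the same run_test wrapper, which is what produces the START TEST / END TEST banners and the real/user/sys timing printed between them. A sketch of the convention only, not the harness's exact implementation:

    # run_test <name> <command...>: banner, time, banner — per the output above.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@" || return 1
        echo "END TEST $name"
    }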
00:20:04.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:04.467 20:20:41 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.467 20:20:41 -- nvmf/common.sh@7 -- # uname -s 00:20:04.467 20:20:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.467 20:20:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.467 20:20:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.467 20:20:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.467 20:20:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.467 20:20:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.467 20:20:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.467 20:20:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.467 20:20:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.467 20:20:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.467 20:20:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:04.467 20:20:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:04.467 20:20:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.467 20:20:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.467 20:20:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.467 20:20:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.467 20:20:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.467 20:20:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.467 20:20:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.467 20:20:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.467 20:20:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.467 20:20:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.467 20:20:41 -- paths/export.sh@5 -- # export PATH 00:20:04.467 20:20:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.467 20:20:41 -- nvmf/common.sh@46 -- # : 0 00:20:04.467 20:20:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:04.467 20:20:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:04.467 20:20:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:04.467 20:20:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.467 20:20:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.467 20:20:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:04.467 20:20:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:04.467 20:20:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:04.467 20:20:41 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:04.467 20:20:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:04.467 20:20:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.467 20:20:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:04.467 20:20:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:04.467 20:20:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:04.467 20:20:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.467 20:20:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.467 20:20:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.467 20:20:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:04.467 20:20:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:04.467 20:20:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:04.467 20:20:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.096 20:20:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:11.096 20:20:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:11.096 20:20:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:11.096 20:20:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:11.096 20:20:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:11.096 20:20:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:11.096 20:20:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:11.096 20:20:47 -- nvmf/common.sh@294 -- # net_devs=() 00:20:11.096 20:20:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:11.096 20:20:47 -- nvmf/common.sh@295 -- # e810=() 00:20:11.096 20:20:47 -- nvmf/common.sh@295 -- # local -ga e810 00:20:11.096 20:20:47 -- nvmf/common.sh@296 -- # x722=() 
00:20:11.096 20:20:47 -- nvmf/common.sh@296 -- # local -ga x722 00:20:11.096 20:20:47 -- nvmf/common.sh@297 -- # mlx=() 00:20:11.096 20:20:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:11.096 20:20:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.096 20:20:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:11.096 20:20:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:11.096 20:20:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:11.096 20:20:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:11.096 20:20:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:11.096 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:11.096 20:20:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:11.096 20:20:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:11.096 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:11.096 20:20:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:11.096 20:20:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:11.096 20:20:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:11.096 20:20:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.096 20:20:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:11.096 20:20:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.096 20:20:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:11.096 Found net devices under 0000:af:00.0: cvl_0_0 00:20:11.096 20:20:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
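This stretch of nvmf/common.sh first buckets the supported NICs by PCI vendor:device ID (0x8086:0x159b, the E810 pair on this host) and then maps each matching PCI address to its kernel netdev name through sysfs, which is where the cvl_0_0/cvl_0_1 names used everywhere below come from. A condensed sketch of the discovery loop as traced (pci_devs is assumed to already hold the matching addresses):

    # Translate PCI addresses into net interface names via sysfs.
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:af:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done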
00:20:11.096 20:20:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:20:11.096 20:20:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:11.096 20:20:47 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:20:11.096 20:20:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:11.096 20:20:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
00:20:11.096 20:20:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:20:11.096 20:20:47 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:20:11.096 20:20:47 -- nvmf/common.sh@402 -- # is_hw=yes
00:20:11.096 20:20:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:20:11.096 20:20:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:20:11.096 20:20:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:20:11.096 20:20:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:11.096 20:20:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:11.096 20:20:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:11.096 20:20:47 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:20:11.096 20:20:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:11.096 20:20:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:11.096 20:20:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:20:11.096 20:20:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:11.096 20:20:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:11.096 20:20:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:20:11.096 20:20:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:20:11.096 20:20:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:20:11.096 20:20:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:11.096 20:20:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:11.096 20:20:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:11.096 20:20:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:20:11.096 20:20:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:11.096 20:20:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:11.096 20:20:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:11.096 20:20:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:20:11.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:11.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms
00:20:11.097
00:20:11.097 --- 10.0.0.2 ping statistics ---
00:20:11.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:11.097 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:20:11.097 20:20:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:11.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:11.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms
00:20:11.097
00:20:11.097 --- 10.0.0.1 ping statistics ---
00:20:11.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:11.097 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms
00:20:11.097 20:20:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:11.097 20:20:47 -- nvmf/common.sh@410 -- # return 0
00:20:11.097 20:20:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:20:11.097 20:20:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:11.097 20:20:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:20:11.097 20:20:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:20:11.097 20:20:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:11.097 20:20:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:20:11.097 20:20:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:20:11.097 20:20:47 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1823846
00:20:11.097 20:20:47 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:20:11.097 20:20:47 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:20:11.097 20:20:47 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1823846
00:20:11.097 20:20:47 -- common/autotest_common.sh@817 -- # '[' -z 1823846 ']'
00:20:11.097 20:20:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:11.097 20:20:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:11.097 20:20:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:11.097 20:20:47 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:11.097 20:20:47 -- common/autotest_common.sh@10 -- # set +x
00:20:11.097 20:20:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:11.097 20:20:48 -- common/autotest_common.sh@850 -- # return 0
00:20:11.097 20:20:48 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:11.097 20:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.097 20:20:48 -- common/autotest_common.sh@10 -- # set +x
00:20:11.356 20:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.356 20:20:48 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512
00:20:11.356 20:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.356 20:20:48 -- common/autotest_common.sh@10 -- # set +x
00:20:11.356 Malloc0
00:20:11.356 20:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.356 20:20:48 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:11.356 20:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.356 20:20:48 -- common/autotest_common.sh@10 -- # set +x
00:20:11.356 20:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.356 20:20:48 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:11.356 20:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.356 20:20:48 -- common/autotest_common.sh@10 -- # set +x
00:20:11.356 20:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.356 20:20:48 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:11.356 20:20:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.356 20:20:48 -- common/autotest_common.sh@10 -- # set +x
00:20:11.356 20:20:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.356 20:20:48 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
00:20:11.356 20:20:48 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
00:20:43.440 Fuzzing completed. Shutting down the fuzz application
00:20:43.440
00:20:43.440 Dumping successful admin opcodes:
00:20:43.440 8, 9, 10, 24,
00:20:43.440 Dumping successful io opcodes:
00:20:43.440 0, 9,
00:20:43.440 NS: 0x200003aeff00 I/O qp, Total commands completed: 992004, total successful commands: 5810, random_seed: 3900259136
00:20:43.440 NS: 0x200003aeff00 admin qp, Total commands completed: 125044, total successful commands: 1025, random_seed: 1261007872
00:20:43.440 20:21:18 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:20:43.440 Fuzzing completed. Shutting down the fuzz application
00:20:43.440
00:20:43.440 Dumping successful admin opcodes:
00:20:43.440 24,
00:20:43.440 Dumping successful io opcodes:
00:20:43.440
00:20:43.440 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 635550542
00:20:43.440 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 635625370
00:20:43.440 20:21:20 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:43.440 20:21:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:43.440 20:21:20 -- common/autotest_common.sh@10 -- # set +x
00:20:43.440 20:21:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:43.440 20:21:20 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:20:43.440 20:21:20 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:20:43.440 20:21:20 -- nvmf/common.sh@476 -- # nvmfcleanup
00:20:43.440 20:21:20 -- nvmf/common.sh@116 -- # sync
00:20:43.440 20:21:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:43.440 20:21:20 -- nvmf/common.sh@119 -- # set +e
00:20:43.440 20:21:20 -- nvmf/common.sh@120 -- # for i in {1..20}
00:20:43.440 20:21:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:20:43.441 20:21:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:43.441 20:21:20 -- nvmf/common.sh@123 -- # set -e
00:20:43.441 20:21:20 -- nvmf/common.sh@124 -- # return 0
00:20:43.441 20:21:20 -- nvmf/common.sh@477 -- # '[' -n 1823846 ']'
00:20:43.441 20:21:20 -- nvmf/common.sh@478 -- # killprocess 1823846
00:20:43.441 20:21:20 -- common/autotest_common.sh@924 -- # '[' -z 1823846 ']'
00:20:43.441 20:21:20 -- common/autotest_common.sh@928 -- # kill -0 1823846
00:20:43.441 20:21:20 -- common/autotest_common.sh@929 -- # uname
00:20:43.441 20:21:20 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:20:43.441 20:21:20 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1823846
00:20:43.441 20:21:20 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:20:43.441 20:21:20 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:20:43.441 20:21:20 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1823846'
killing process with pid 1823846
00:20:43.441 20:21:20 -- common/autotest_common.sh@943 -- # kill 1823846
00:20:43.441 20:21:20 -- common/autotest_common.sh@948 -- # wait 1823846
00:20:43.441 20:21:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:43.441 20:21:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:43.441 20:21:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:43.441 20:21:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:43.441 20:21:20 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:43.441 20:21:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:43.441 20:21:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:43.441 20:21:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:45.349 20:21:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:20:45.349 20:21:22 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt
00:20:45.349
00:20:45.349 real 0m41.231s
00:20:45.349 user 0m54.376s
00:20:45.349 sys
0m16.032s 00:20:45.349 20:21:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:45.349 20:21:22 -- common/autotest_common.sh@10 -- # set +x 00:20:45.349 ************************************ 00:20:45.349 END TEST nvmf_fuzz 00:20:45.349 ************************************ 00:20:45.349 20:21:22 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:45.349 20:21:22 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:20:45.349 20:21:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:45.349 20:21:22 -- common/autotest_common.sh@10 -- # set +x 00:20:45.349 ************************************ 00:20:45.349 START TEST nvmf_multiconnection 00:20:45.349 ************************************ 00:20:45.349 20:21:22 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:45.609 * Looking for test storage... 00:20:45.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:45.609 20:21:22 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:45.609 20:21:22 -- nvmf/common.sh@7 -- # uname -s 00:20:45.609 20:21:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.609 20:21:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.609 20:21:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.609 20:21:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.609 20:21:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.609 20:21:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.609 20:21:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.609 20:21:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.609 20:21:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.609 20:21:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.609 20:21:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:45.609 20:21:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:45.609 20:21:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.609 20:21:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.609 20:21:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:45.609 20:21:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:45.609 20:21:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.609 20:21:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.609 20:21:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.609 20:21:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.609 20:21:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.609 20:21:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.609 20:21:22 -- paths/export.sh@5 -- # export PATH 00:20:45.609 20:21:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.609 20:21:22 -- nvmf/common.sh@46 -- # : 0 00:20:45.609 20:21:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:45.609 20:21:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:45.609 20:21:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:45.609 20:21:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.609 20:21:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.609 20:21:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:45.609 20:21:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:45.609 20:21:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:45.609 20:21:22 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.609 20:21:22 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.609 20:21:22 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:45.609 20:21:22 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:45.609 20:21:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:45.609 20:21:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.609 20:21:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:45.609 20:21:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:45.609 20:21:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:45.609 20:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.609 20:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.609 20:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.609 20:21:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:45.609 20:21:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:45.609 20:21:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:45.609 20:21:22 -- common/autotest_common.sh@10 -- 
# set +x 00:20:52.176 20:21:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:52.176 20:21:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:52.176 20:21:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:52.176 20:21:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:52.176 20:21:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:52.176 20:21:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:52.176 20:21:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:52.176 20:21:28 -- nvmf/common.sh@294 -- # net_devs=() 00:20:52.176 20:21:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:52.176 20:21:28 -- nvmf/common.sh@295 -- # e810=() 00:20:52.176 20:21:28 -- nvmf/common.sh@295 -- # local -ga e810 00:20:52.176 20:21:28 -- nvmf/common.sh@296 -- # x722=() 00:20:52.176 20:21:28 -- nvmf/common.sh@296 -- # local -ga x722 00:20:52.176 20:21:28 -- nvmf/common.sh@297 -- # mlx=() 00:20:52.176 20:21:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:52.176 20:21:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.176 20:21:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.176 20:21:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.176 20:21:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.176 20:21:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.176 20:21:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.176 20:21:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.176 20:21:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.176 20:21:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.176 20:21:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.177 20:21:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.177 20:21:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:52.177 20:21:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:52.177 20:21:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:52.177 20:21:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.177 20:21:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:52.177 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:52.177 20:21:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.177 20:21:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:52.177 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:52.177 20:21:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.177 20:21:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.177 20:21:28 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:20:52.177 20:21:28 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:20:52.177 20:21:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:20:52.177 20:21:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:20:52.177 20:21:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:20:52.177 20:21:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:52.177 20:21:28 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:20:52.177 20:21:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:52.177 20:21:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
Found net devices under 0000:af:00.0: cvl_0_0
00:20:52.177 20:21:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:20:52.177 20:21:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:20:52.177 20:21:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:52.177 20:21:28 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:20:52.177 20:21:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:52.177 20:21:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
00:20:52.177 20:21:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:20:52.177 20:21:28 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:20:52.177 20:21:28 -- nvmf/common.sh@402 -- # is_hw=yes
00:20:52.177 20:21:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:20:52.177 20:21:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:20:52.177 20:21:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:20:52.177 20:21:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:52.177 20:21:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:52.177 20:21:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:52.177 20:21:28 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:20:52.177 20:21:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:52.177 20:21:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:52.177 20:21:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:20:52.177 20:21:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:52.177 20:21:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:52.177 20:21:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:20:52.177 20:21:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:20:52.177 20:21:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:20:52.177 20:21:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:52.177 20:21:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:52.177 20:21:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:52.177 20:21:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:20:52.177 20:21:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:52.177 20:21:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:52.177 20:21:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:52.177 20:21:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:20:52.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:52.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms
00:20:52.177
00:20:52.177 --- 10.0.0.2 ping statistics ---
00:20:52.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.177 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:20:52.177 20:21:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:52.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:52.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms
00:20:52.177
00:20:52.177 --- 10.0.0.1 ping statistics ---
00:20:52.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.177 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms
00:20:52.177 20:21:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:52.177 20:21:28 -- nvmf/common.sh@410 -- # return 0
00:20:52.177 20:21:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:20:52.177 20:21:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:52.177 20:21:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:20:52.177 20:21:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:20:52.177 20:21:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:52.177 20:21:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:20:52.177 20:21:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:20:52.177 20:21:28 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:20:52.177 20:21:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:20:52.177 20:21:28 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:52.177 20:21:28 -- common/autotest_common.sh@10 -- # set +x
00:20:52.177 20:21:28 -- nvmf/common.sh@469 -- # nvmfpid=1833430
00:20:52.177 20:21:28 -- nvmf/common.sh@470 -- # waitforlisten 1833430
00:20:52.177 20:21:28 -- common/autotest_common.sh@817 -- # '[' -z 1833430 ']'
00:20:52.177 20:21:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:52.177 20:21:28 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:52.177 20:21:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:52.177 20:21:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:20:52.177 20:21:28 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:52.177 20:21:28 -- common/autotest_common.sh@10 -- # set +x
00:20:52.177 [2024-02-14 20:21:28.675393] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:20:52.177 [2024-02-14 20:21:28.675436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:52.177 EAL: No free 2048 kB hugepages reported on node 1
00:20:52.177 [2024-02-14 20:21:28.736793] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:52.177 [2024-02-14 20:21:28.814197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:20:52.177 [2024-02-14 20:21:28.814304] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:52.177 [2024-02-14 20:21:28.814312] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
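After the four reactors come up, multiconnection.sh repeats the familiar provisioning recipe NVMF_SUBSYS=11 times, one malloc bdev and one subsystem per future connection; the long xtrace below is that loop unrolled, Malloc1 through Malloc11. In outline, using the same RPCs the trace shows:

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done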
00:20:52.177 [2024-02-14 20:21:28.814318] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.177 [2024-02-14 20:21:28.814363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.177 [2024-02-14 20:21:28.814477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.177 [2024-02-14 20:21:28.814495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.177 [2024-02-14 20:21:28.814496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.177 20:21:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:52.177 20:21:29 -- common/autotest_common.sh@850 -- # return 0 00:20:52.177 20:21:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:52.177 20:21:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:52.177 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.177 20:21:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.177 20:21:29 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:52.177 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.177 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.177 [2024-02-14 20:21:29.507871] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.177 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.177 20:21:29 -- target/multiconnection.sh@21 -- # seq 1 11 00:20:52.177 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.177 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:52.177 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.177 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.177 Malloc1 00:20:52.177 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.177 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:52.177 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.177 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.177 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.177 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:52.177 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.177 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.177 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.177 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:52.177 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.177 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.177 [2024-02-14 20:21:29.563483] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.177 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.177 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.178 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:52.178 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.178 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.178 Malloc2 00:20:52.178 20:21:29 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.178 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:52.178 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.178 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.437 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 Malloc3 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.437 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 Malloc4 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 
-- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.437 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 Malloc5 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.437 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 Malloc6 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.437 20:21:29 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 Malloc7 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.437 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.437 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.437 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:52.437 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.437 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.438 Malloc8 00:20:52.438 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.438 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:52.438 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.438 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.697 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 Malloc9 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.697 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 Malloc10 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.697 20:21:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 Malloc11 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.697 20:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 20:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.697 20:21:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:20:52.697 20:21:29 -- common/autotest_common.sh@549 -- # 
00:20:52.697 20:21:30 -- target/multiconnection.sh@28 -- # seq 1 11
00:20:52.697 20:21:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:20:52.697 20:21:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:20:54.073 20:21:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:20:54.073 20:21:31 -- common/autotest_common.sh@1175 -- # local i=0
00:20:54.073 20:21:31 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0
00:20:54.073 20:21:31 -- common/autotest_common.sh@1177 -- # [[ -n '' ]]
00:20:54.073 20:21:31 -- common/autotest_common.sh@1182 -- # sleep 2
00:20:55.977 20:21:33 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 ))
00:20:55.977 20:21:33 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL
00:20:55.977 20:21:33 -- common/autotest_common.sh@1184 -- # grep -c SPDK1
00:20:55.977 20:21:33 -- common/autotest_common.sh@1184 -- # nvme_devices=1
00:20:55.977 20:21:33 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter ))
00:20:55.977 20:21:33 -- common/autotest_common.sh@1185 -- # return 0
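That first connect-and-poll cycle (multiconnection.sh@28-30 driving the helper at common/autotest_common.sh@1175-1185) now repeats for cnode2 through cnode11. Reconstructed from the numbered xtrace lines, the host-side loop and the polling helper look roughly like the sketch below; the exact control flow (and the 15-iteration bound) is inferred from the traced line numbers, so treat it as an approximation, and $HOSTNQN/$HOSTID stand in for the fixed per-node UUID value (nqn.2014-08.org.nvmexpress:uuid:801347e8-...) shown in the trace.

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    # poll until lsblk reports a block device carrying the expected serial
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    waitforserial "SPDK$i"
done

Each iteration surfaces one /dev/nvmeXn1 namespace on the initiator; those are the devices the fio jobs target once all eleven connects complete.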
00:20:55.977 20:21:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:20:55.977 20:21:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:20:57.354 20:21:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:20:57.354 20:21:34 -- common/autotest_common.sh@1175 -- # local i=0
00:20:57.354 20:21:34 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0
00:20:57.354 20:21:34 -- common/autotest_common.sh@1177 -- # [[ -n '' ]]
00:20:57.354 20:21:34 -- common/autotest_common.sh@1182 -- # sleep 2
00:20:59.257 20:21:36 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 ))
00:20:59.257 20:21:36 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL
00:20:59.257 20:21:36 -- common/autotest_common.sh@1184 -- # grep -c SPDK2
00:20:59.257 20:21:36 -- common/autotest_common.sh@1184 -- # nvme_devices=1
00:20:59.257 20:21:36 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter ))
00:20:59.257 20:21:36 -- common/autotest_common.sh@1185 -- # return 0
00:20:59.257 20:21:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:20:59.257 20:21:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:21:00.257 20:21:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:21:00.257 20:21:37 -- common/autotest_common.sh@1175 -- # local i=0
00:21:00.257 20:21:37 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0
00:21:00.257 20:21:37 -- common/autotest_common.sh@1177 -- # [[ -n '' ]]
00:21:00.257 20:21:37 -- common/autotest_common.sh@1182 -- # sleep 2
00:21:02.791 20:21:39 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 ))
00:21:02.791 20:21:39 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL
00:21:02.791 20:21:39 -- common/autotest_common.sh@1184 -- # grep -c SPDK3
00:21:02.791 20:21:39 -- common/autotest_common.sh@1184 -- # nvme_devices=1
00:21:02.791 20:21:39 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter ))
00:21:02.791 20:21:39 -- common/autotest_common.sh@1185 -- # return 0
00:21:02.791 20:21:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:21:02.791 20:21:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:21:03.728 20:21:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:21:03.728 20:21:40 -- common/autotest_common.sh@1175 -- # local i=0
00:21:03.728 20:21:40 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0
00:21:03.728 20:21:40 -- common/autotest_common.sh@1177 -- # [[ -n '' ]]
00:21:03.728 20:21:40 -- common/autotest_common.sh@1182 -- # sleep 2
00:21:05.631 20:21:42 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 ))
00:21:05.631 20:21:42 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL
00:21:05.631 20:21:42 -- common/autotest_common.sh@1184 -- # grep -c SPDK4
00:21:05.631 20:21:42 -- common/autotest_common.sh@1184 -- # nvme_devices=1
00:21:05.631 20:21:42 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter ))
00:21:05.631 20:21:42 -- common/autotest_common.sh@1185 -- # return 0
00:21:05.631 20:21:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:21:05.631 20:21:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:21:07.008 20:21:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:21:07.008 20:21:44 -- common/autotest_common.sh@1175 -- # local i=0
00:21:07.008 20:21:44 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0
00:21:07.008 20:21:44 -- common/autotest_common.sh@1177 -- # [[ -n '' ]]
00:21:07.008 20:21:44 -- common/autotest_common.sh@1182 -- # sleep 2
00:21:08.907 20:21:46 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 ))
00:21:08.907 20:21:46 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL
00:21:08.907 20:21:46 -- common/autotest_common.sh@1184 -- # grep -c SPDK5
00:21:08.907 20:21:46 -- common/autotest_common.sh@1184 -- # nvme_devices=1
00:21:08.907 20:21:46 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter ))
00:21:08.907 20:21:46 -- common/autotest_common.sh@1185 -- # return 0
00:21:08.907 20:21:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:21:08.907 20:21:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:21:10.284 20:21:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:21:10.284 20:21:47 -- common/autotest_common.sh@1175 -- # local i=0
00:21:10.284 20:21:47 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0
00:21:10.284 20:21:47 -- common/autotest_common.sh@1177 -- # [[ -n '' ]]
00:21:10.284 20:21:47 -- common/autotest_common.sh@1182 -- # sleep 2
00:21:12.189
20:21:49 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:12.189 20:21:49 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:12.189 20:21:49 -- common/autotest_common.sh@1184 -- # grep -c SPDK6 00:21:12.189 20:21:49 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:12.189 20:21:49 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:12.189 20:21:49 -- common/autotest_common.sh@1185 -- # return 0 00:21:12.189 20:21:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.189 20:21:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:13.566 20:21:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:13.566 20:21:50 -- common/autotest_common.sh@1175 -- # local i=0 00:21:13.566 20:21:50 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:13.566 20:21:50 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:13.566 20:21:50 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:15.470 20:21:52 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:15.470 20:21:52 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:15.470 20:21:52 -- common/autotest_common.sh@1184 -- # grep -c SPDK7 00:21:15.470 20:21:52 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:15.470 20:21:52 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.470 20:21:52 -- common/autotest_common.sh@1185 -- # return 0 00:21:15.470 20:21:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.470 20:21:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:16.848 20:21:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:16.848 20:21:54 -- common/autotest_common.sh@1175 -- # local i=0 00:21:16.848 20:21:54 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:16.848 20:21:54 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:16.848 20:21:54 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:18.751 20:21:56 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:18.751 20:21:56 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:18.751 20:21:56 -- common/autotest_common.sh@1184 -- # grep -c SPDK8 00:21:19.010 20:21:56 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:19.010 20:21:56 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:19.010 20:21:56 -- common/autotest_common.sh@1185 -- # return 0 00:21:19.010 20:21:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:19.010 20:21:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:21:20.386 20:21:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:20.387 20:21:57 -- common/autotest_common.sh@1175 -- # local i=0 00:21:20.387 20:21:57 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:20.387 20:21:57 -- 
common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:20.387 20:21:57 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:22.290 20:21:59 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:22.290 20:21:59 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:22.290 20:21:59 -- common/autotest_common.sh@1184 -- # grep -c SPDK9 00:21:22.550 20:21:59 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:22.550 20:21:59 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:22.550 20:21:59 -- common/autotest_common.sh@1185 -- # return 0 00:21:22.550 20:21:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.550 20:21:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:23.962 20:22:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:23.962 20:22:01 -- common/autotest_common.sh@1175 -- # local i=0 00:21:23.962 20:22:01 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:23.962 20:22:01 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:23.962 20:22:01 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:25.863 20:22:03 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:25.863 20:22:03 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:25.863 20:22:03 -- common/autotest_common.sh@1184 -- # grep -c SPDK10 00:21:25.863 20:22:03 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:25.863 20:22:03 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:25.863 20:22:03 -- common/autotest_common.sh@1185 -- # return 0 00:21:25.863 20:22:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:25.863 20:22:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:27.238 20:22:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:27.238 20:22:04 -- common/autotest_common.sh@1175 -- # local i=0 00:21:27.238 20:22:04 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:27.238 20:22:04 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:27.238 20:22:04 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:29.140 20:22:06 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:29.140 20:22:06 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:29.140 20:22:06 -- common/autotest_common.sh@1184 -- # grep -c SPDK11 00:21:29.140 20:22:06 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:29.140 20:22:06 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:29.140 20:22:06 -- common/autotest_common.sh@1185 -- # return 0 00:21:29.140 20:22:06 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:29.140 [global] 00:21:29.140 thread=1 00:21:29.140 invalidate=1 00:21:29.140 rw=read 00:21:29.140 time_based=1 00:21:29.140 runtime=10 00:21:29.140 ioengine=libaio 00:21:29.140 direct=1 00:21:29.140 bs=262144 00:21:29.140 iodepth=64 00:21:29.140 norandommap=1 00:21:29.140 numjobs=1 00:21:29.140 00:21:29.140 [job0] 
00:21:29.140 filename=/dev/nvme0n1 00:21:29.140 [job1] 00:21:29.140 filename=/dev/nvme10n1 00:21:29.140 [job2] 00:21:29.140 filename=/dev/nvme11n1 00:21:29.140 [job3] 00:21:29.140 filename=/dev/nvme2n1 00:21:29.140 [job4] 00:21:29.140 filename=/dev/nvme3n1 00:21:29.140 [job5] 00:21:29.140 filename=/dev/nvme4n1 00:21:29.140 [job6] 00:21:29.140 filename=/dev/nvme5n1 00:21:29.140 [job7] 00:21:29.140 filename=/dev/nvme6n1 00:21:29.140 [job8] 00:21:29.140 filename=/dev/nvme7n1 00:21:29.140 [job9] 00:21:29.140 filename=/dev/nvme8n1 00:21:29.140 [job10] 00:21:29.140 filename=/dev/nvme9n1 00:21:29.399 Could not set queue depth (nvme0n1) 00:21:29.399 Could not set queue depth (nvme10n1) 00:21:29.399 Could not set queue depth (nvme11n1) 00:21:29.399 Could not set queue depth (nvme2n1) 00:21:29.399 Could not set queue depth (nvme3n1) 00:21:29.399 Could not set queue depth (nvme4n1) 00:21:29.399 Could not set queue depth (nvme5n1) 00:21:29.399 Could not set queue depth (nvme6n1) 00:21:29.399 Could not set queue depth (nvme7n1) 00:21:29.399 Could not set queue depth (nvme8n1) 00:21:29.399 Could not set queue depth (nvme9n1) 00:21:29.657 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.657 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.657 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.657 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.658 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.658 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.658 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.658 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.658 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.658 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.658 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:29.658 fio-3.35 00:21:29.658 Starting 11 threads 00:21:41.872 00:21:41.872 job0: (groupid=0, jobs=1): err= 0: pid=1840132: Wed Feb 14 20:22:17 2024 00:21:41.872 read: IOPS=698, BW=175MiB/s (183MB/s)(1767MiB/10118msec) 00:21:41.872 slat (usec): min=9, max=115471, avg=918.27, stdev=4426.19 00:21:41.872 clat (usec): min=967, max=267590, avg=90568.67, stdev=52542.66 00:21:41.872 lat (usec): min=993, max=269787, avg=91486.94, stdev=53170.39 00:21:41.872 clat percentiles (msec): 00:21:41.872 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 23], 20.00th=[ 41], 00:21:41.872 | 30.00th=[ 54], 40.00th=[ 71], 50.00th=[ 87], 60.00th=[ 105], 00:21:41.872 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 159], 95.00th=[ 190], 00:21:41.872 | 99.00th=[ 230], 99.50th=[ 243], 99.90th=[ 266], 99.95th=[ 266], 00:21:41.872 | 99.99th=[ 268] 00:21:41.872 bw ( KiB/s): min=80896, max=359936, per=8.95%, avg=179353.60, stdev=81136.99, samples=20 00:21:41.872 iops : min= 316, max= 1406, avg=700.60, stdev=316.94, samples=20 00:21:41.872 lat (usec) : 1000=0.01% 00:21:41.872 lat (msec) : 2=0.20%, 
4=0.55%, 10=1.15%, 20=6.86%, 50=18.25% 00:21:41.872 lat (msec) : 100=30.20%, 250=42.38%, 500=0.40% 00:21:41.872 cpu : usr=0.27%, sys=2.40%, ctx=2082, majf=0, minf=3347 00:21:41.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:41.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.872 issued rwts: total=7069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.872 job1: (groupid=0, jobs=1): err= 0: pid=1840133: Wed Feb 14 20:22:17 2024 00:21:41.872 read: IOPS=663, BW=166MiB/s (174MB/s)(1680MiB/10122msec) 00:21:41.872 slat (usec): min=11, max=88566, avg=1378.30, stdev=4256.81 00:21:41.872 clat (msec): min=5, max=270, avg=94.90, stdev=41.55 00:21:41.872 lat (msec): min=5, max=270, avg=96.28, stdev=42.14 00:21:41.872 clat percentiles (msec): 00:21:41.872 | 1.00th=[ 14], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 51], 00:21:41.872 | 30.00th=[ 68], 40.00th=[ 86], 50.00th=[ 101], 60.00th=[ 111], 00:21:41.872 | 70.00th=[ 123], 80.00th=[ 136], 90.00th=[ 146], 95.00th=[ 157], 00:21:41.872 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 255], 99.95th=[ 266], 00:21:41.872 | 99.99th=[ 271] 00:21:41.872 bw ( KiB/s): min=108544, max=333824, per=8.51%, avg=170443.40, stdev=63548.44, samples=20 00:21:41.872 iops : min= 424, max= 1304, avg=665.75, stdev=248.18, samples=20 00:21:41.872 lat (msec) : 10=0.10%, 20=1.46%, 50=18.17%, 100=30.13%, 250=50.00% 00:21:41.872 lat (msec) : 500=0.13% 00:21:41.872 cpu : usr=0.32%, sys=2.71%, ctx=1474, majf=0, minf=4097 00:21:41.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:41.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.872 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.872 job2: (groupid=0, jobs=1): err= 0: pid=1840135: Wed Feb 14 20:22:17 2024 00:21:41.872 read: IOPS=704, BW=176MiB/s (185MB/s)(1781MiB/10115msec) 00:21:41.872 slat (usec): min=10, max=143817, avg=1002.06, stdev=4066.38 00:21:41.872 clat (msec): min=4, max=308, avg=89.76, stdev=44.66 00:21:41.872 lat (msec): min=4, max=336, avg=90.76, stdev=45.15 00:21:41.872 clat percentiles (msec): 00:21:41.872 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 51], 00:21:41.872 | 30.00th=[ 62], 40.00th=[ 73], 50.00th=[ 86], 60.00th=[ 100], 00:21:41.872 | 70.00th=[ 113], 80.00th=[ 129], 90.00th=[ 144], 95.00th=[ 161], 00:21:41.872 | 99.00th=[ 224], 99.50th=[ 239], 99.90th=[ 249], 99.95th=[ 271], 00:21:41.872 | 99.99th=[ 309] 00:21:41.872 bw ( KiB/s): min=103424, max=330752, per=9.03%, avg=180774.00, stdev=62855.29, samples=20 00:21:41.872 iops : min= 404, max= 1292, avg=706.10, stdev=245.57, samples=20 00:21:41.872 lat (msec) : 10=0.81%, 20=2.61%, 50=16.28%, 100=40.55%, 250=39.65% 00:21:41.872 lat (msec) : 500=0.10% 00:21:41.872 cpu : usr=0.32%, sys=2.55%, ctx=1803, majf=0, minf=4097 00:21:41.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:41.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.872 issued rwts: total=7125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.872 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:21:41.872 job3: (groupid=0, jobs=1): err= 0: pid=1840141: Wed Feb 14 20:22:17 2024 00:21:41.872 read: IOPS=892, BW=223MiB/s (234MB/s)(2242MiB/10049msec) 00:21:41.872 slat (usec): min=9, max=175397, avg=885.30, stdev=4403.45 00:21:41.872 clat (msec): min=4, max=266, avg=70.75, stdev=40.15 00:21:41.872 lat (msec): min=4, max=341, avg=71.63, stdev=40.68 00:21:41.872 clat percentiles (msec): 00:21:41.872 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 43], 00:21:41.872 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 62], 60.00th=[ 69], 00:21:41.872 | 70.00th=[ 80], 80.00th=[ 90], 90.00th=[ 125], 95.00th=[ 153], 00:21:41.872 | 99.00th=[ 211], 99.50th=[ 234], 99.90th=[ 259], 99.95th=[ 259], 00:21:41.872 | 99.99th=[ 268] 00:21:41.872 bw ( KiB/s): min=134144, max=340992, per=11.38%, avg=227968.00, stdev=61522.56, samples=20 00:21:41.872 iops : min= 524, max= 1332, avg=890.50, stdev=240.32, samples=20 00:21:41.872 lat (msec) : 10=0.17%, 20=2.62%, 50=32.26%, 100=50.27%, 250=14.43% 00:21:41.872 lat (msec) : 500=0.26% 00:21:41.872 cpu : usr=0.26%, sys=3.40%, ctx=2139, majf=0, minf=4097 00:21:41.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:41.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.872 issued rwts: total=8968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.872 job4: (groupid=0, jobs=1): err= 0: pid=1840143: Wed Feb 14 20:22:17 2024 00:21:41.872 read: IOPS=626, BW=157MiB/s (164MB/s)(1571MiB/10038msec) 00:21:41.872 slat (usec): min=11, max=101681, avg=1315.67, stdev=4416.97 00:21:41.872 clat (msec): min=2, max=246, avg=100.80, stdev=41.90 00:21:41.872 lat (msec): min=2, max=284, avg=102.12, stdev=42.22 00:21:41.872 clat percentiles (msec): 00:21:41.872 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 57], 20.00th=[ 72], 00:21:41.872 | 30.00th=[ 81], 40.00th=[ 87], 50.00th=[ 95], 60.00th=[ 104], 00:21:41.872 | 70.00th=[ 115], 80.00th=[ 132], 90.00th=[ 155], 95.00th=[ 188], 00:21:41.872 | 99.00th=[ 224], 99.50th=[ 239], 99.90th=[ 243], 99.95th=[ 245], 00:21:41.872 | 99.99th=[ 247] 00:21:41.872 bw ( KiB/s): min=91648, max=260608, per=7.95%, avg=159271.05, stdev=41152.01, samples=20 00:21:41.872 iops : min= 358, max= 1018, avg=622.15, stdev=160.75, samples=20 00:21:41.872 lat (msec) : 4=0.05%, 10=0.70%, 20=1.75%, 50=5.97%, 100=48.44% 00:21:41.872 lat (msec) : 250=43.09% 00:21:41.872 cpu : usr=0.32%, sys=2.54%, ctx=1531, majf=0, minf=4097 00:21:41.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:41.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.872 issued rwts: total=6284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.872 job5: (groupid=0, jobs=1): err= 0: pid=1840157: Wed Feb 14 20:22:17 2024 00:21:41.872 read: IOPS=827, BW=207MiB/s (217MB/s)(2073MiB/10015msec) 00:21:41.872 slat (usec): min=8, max=86003, avg=1037.25, stdev=3656.37 00:21:41.872 clat (msec): min=3, max=238, avg=76.19, stdev=39.62 00:21:41.872 lat (msec): min=3, max=254, avg=77.22, stdev=40.17 00:21:41.872 clat percentiles (msec): 00:21:41.872 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 44], 00:21:41.872 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 67], 
60.00th=[ 81], 00:21:41.872 | 70.00th=[ 92], 80.00th=[ 109], 90.00th=[ 134], 95.00th=[ 150], 00:21:41.873 | 99.00th=[ 188], 99.50th=[ 226], 99.90th=[ 226], 99.95th=[ 228], 00:21:41.873 | 99.99th=[ 239] 00:21:41.873 bw ( KiB/s): min=107008, max=381440, per=10.52%, avg=210636.80, stdev=80380.89, samples=20 00:21:41.873 iops : min= 418, max= 1490, avg=822.80, stdev=313.99, samples=20 00:21:41.873 lat (msec) : 4=0.01%, 10=0.36%, 20=2.80%, 50=24.95%, 100=47.30% 00:21:41.873 lat (msec) : 250=24.57% 00:21:41.873 cpu : usr=0.47%, sys=3.11%, ctx=1877, majf=0, minf=4097 00:21:41.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:41.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.873 issued rwts: total=8291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.873 job6: (groupid=0, jobs=1): err= 0: pid=1840162: Wed Feb 14 20:22:17 2024 00:21:41.873 read: IOPS=774, BW=194MiB/s (203MB/s)(1959MiB/10116msec) 00:21:41.873 slat (usec): min=9, max=84399, avg=1069.05, stdev=4054.65 00:21:41.873 clat (msec): min=5, max=298, avg=81.44, stdev=50.80 00:21:41.873 lat (msec): min=5, max=298, avg=82.51, stdev=51.42 00:21:41.873 clat percentiles (msec): 00:21:41.873 | 1.00th=[ 11], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 39], 00:21:41.873 | 30.00th=[ 49], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 85], 00:21:41.873 | 70.00th=[ 101], 80.00th=[ 123], 90.00th=[ 155], 95.00th=[ 184], 00:21:41.873 | 99.00th=[ 232], 99.50th=[ 245], 99.90th=[ 255], 99.95th=[ 259], 00:21:41.873 | 99.99th=[ 300] 00:21:41.873 bw ( KiB/s): min=79360, max=388608, per=9.93%, avg=198988.80, stdev=89278.79, samples=20 00:21:41.873 iops : min= 310, max= 1518, avg=777.30, stdev=348.75, samples=20 00:21:41.873 lat (msec) : 10=1.08%, 20=5.96%, 50=25.38%, 100=37.51%, 250=29.86% 00:21:41.873 lat (msec) : 500=0.20% 00:21:41.873 cpu : usr=0.33%, sys=2.94%, ctx=1858, majf=0, minf=4097 00:21:41.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:41.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.873 issued rwts: total=7837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.873 job7: (groupid=0, jobs=1): err= 0: pid=1840166: Wed Feb 14 20:22:17 2024 00:21:41.873 read: IOPS=765, BW=191MiB/s (201MB/s)(1939MiB/10128msec) 00:21:41.873 slat (usec): min=8, max=212472, avg=1024.74, stdev=5021.44 00:21:41.873 clat (msec): min=4, max=270, avg=82.47, stdev=47.04 00:21:41.873 lat (msec): min=4, max=372, avg=83.49, stdev=47.73 00:21:41.873 clat percentiles (msec): 00:21:41.873 | 1.00th=[ 13], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 45], 00:21:41.873 | 30.00th=[ 51], 40.00th=[ 62], 50.00th=[ 74], 60.00th=[ 86], 00:21:41.873 | 70.00th=[ 101], 80.00th=[ 121], 90.00th=[ 142], 95.00th=[ 167], 00:21:41.873 | 99.00th=[ 241], 99.50th=[ 259], 99.90th=[ 268], 99.95th=[ 271], 00:21:41.873 | 99.99th=[ 271] 00:21:41.873 bw ( KiB/s): min=97280, max=347136, per=9.83%, avg=196940.80, stdev=75419.25, samples=20 00:21:41.873 iops : min= 380, max= 1356, avg=769.30, stdev=294.61, samples=20 00:21:41.873 lat (msec) : 10=0.62%, 20=3.75%, 50=24.17%, 100=41.67%, 250=28.99% 00:21:41.873 lat (msec) : 500=0.80% 00:21:41.873 cpu : usr=0.29%, sys=2.92%, ctx=1900, majf=0, 
minf=4097 00:21:41.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:41.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.873 issued rwts: total=7757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.873 job8: (groupid=0, jobs=1): err= 0: pid=1840180: Wed Feb 14 20:22:17 2024 00:21:41.873 read: IOPS=639, BW=160MiB/s (168MB/s)(1608MiB/10061msec) 00:21:41.873 slat (usec): min=8, max=92425, avg=1032.33, stdev=4114.95 00:21:41.873 clat (usec): min=1833, max=267140, avg=98993.77, stdev=45531.10 00:21:41.873 lat (usec): min=1864, max=267892, avg=100026.10, stdev=45978.19 00:21:41.873 clat percentiles (msec): 00:21:41.873 | 1.00th=[ 11], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 64], 00:21:41.873 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 92], 60.00th=[ 102], 00:21:41.873 | 70.00th=[ 115], 80.00th=[ 136], 90.00th=[ 163], 95.00th=[ 188], 00:21:41.873 | 99.00th=[ 226], 99.50th=[ 243], 99.90th=[ 249], 99.95th=[ 249], 00:21:41.873 | 99.99th=[ 268] 00:21:41.873 bw ( KiB/s): min=90112, max=232960, per=8.14%, avg=163015.30, stdev=41588.24, samples=20 00:21:41.873 iops : min= 352, max= 910, avg=636.75, stdev=162.43, samples=20 00:21:41.873 lat (msec) : 2=0.06%, 4=0.25%, 10=0.54%, 20=1.35%, 50=9.97% 00:21:41.873 lat (msec) : 100=46.56%, 250=41.21%, 500=0.05% 00:21:41.873 cpu : usr=0.17%, sys=2.21%, ctx=1871, majf=0, minf=4097 00:21:41.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:41.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.873 issued rwts: total=6430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.873 job9: (groupid=0, jobs=1): err= 0: pid=1840193: Wed Feb 14 20:22:17 2024 00:21:41.873 read: IOPS=598, BW=150MiB/s (157MB/s)(1517MiB/10127msec) 00:21:41.873 slat (usec): min=9, max=227548, avg=1151.93, stdev=5918.50 00:21:41.873 clat (msec): min=3, max=424, avg=105.56, stdev=51.84 00:21:41.873 lat (msec): min=3, max=425, avg=106.72, stdev=52.60 00:21:41.873 clat percentiles (msec): 00:21:41.873 | 1.00th=[ 12], 5.00th=[ 26], 10.00th=[ 45], 20.00th=[ 73], 00:21:41.873 | 30.00th=[ 83], 40.00th=[ 92], 50.00th=[ 101], 60.00th=[ 110], 00:21:41.873 | 70.00th=[ 123], 80.00th=[ 134], 90.00th=[ 153], 95.00th=[ 205], 00:21:41.873 | 99.00th=[ 296], 99.50th=[ 334], 99.90th=[ 351], 99.95th=[ 359], 00:21:41.873 | 99.99th=[ 426] 00:21:41.873 bw ( KiB/s): min=72704, max=212480, per=7.67%, avg=153651.20, stdev=41783.68, samples=20 00:21:41.873 iops : min= 284, max= 830, avg=600.20, stdev=163.22, samples=20 00:21:41.873 lat (msec) : 4=0.02%, 10=0.61%, 20=2.74%, 50=7.27%, 100=39.19% 00:21:41.873 lat (msec) : 250=47.33%, 500=2.85% 00:21:41.873 cpu : usr=0.19%, sys=1.98%, ctx=1732, majf=0, minf=4097 00:21:41.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:41.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.873 issued rwts: total=6066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.873 job10: (groupid=0, jobs=1): err= 0: pid=1840204: Wed Feb 14 20:22:17 2024 
00:21:41.873 read: IOPS=661, BW=165MiB/s (173MB/s)(1674MiB/10117msec) 00:21:41.873 slat (usec): min=10, max=144117, avg=1248.35, stdev=4357.39 00:21:41.873 clat (msec): min=8, max=276, avg=95.35, stdev=45.70 00:21:41.873 lat (msec): min=8, max=342, avg=96.60, stdev=46.15 00:21:41.873 clat percentiles (msec): 00:21:41.873 | 1.00th=[ 17], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 57], 00:21:41.873 | 30.00th=[ 73], 40.00th=[ 83], 50.00th=[ 92], 60.00th=[ 101], 00:21:41.873 | 70.00th=[ 111], 80.00th=[ 126], 90.00th=[ 148], 95.00th=[ 180], 00:21:41.873 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 271], 99.95th=[ 271], 00:21:41.873 | 99.99th=[ 275] 00:21:41.873 bw ( KiB/s): min=65024, max=297984, per=8.48%, avg=169779.20, stdev=57921.04, samples=20 00:21:41.874 iops : min= 254, max= 1164, avg=663.20, stdev=226.25, samples=20 00:21:41.874 lat (msec) : 10=0.01%, 20=2.30%, 50=12.59%, 100=44.21%, 250=39.99% 00:21:41.874 lat (msec) : 500=0.90% 00:21:41.874 cpu : usr=0.29%, sys=2.51%, ctx=1603, majf=0, minf=4097 00:21:41.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:41.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.874 issued rwts: total=6695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.874 00:21:41.874 Run status group 0 (all jobs): 00:21:41.874 READ: bw=1956MiB/s (2051MB/s), 150MiB/s-223MiB/s (157MB/s-234MB/s), io=19.3GiB (20.8GB), run=10015-10128msec 00:21:41.874 00:21:41.874 Disk stats (read/write): 00:21:41.874 nvme0n1: ios=13980/0, merge=0/0, ticks=1237132/0, in_queue=1237132, util=97.31% 00:21:41.874 nvme10n1: ios=13239/0, merge=0/0, ticks=1221853/0, in_queue=1221853, util=97.49% 00:21:41.874 nvme11n1: ios=14094/0, merge=0/0, ticks=1231367/0, in_queue=1231367, util=97.59% 00:21:41.874 nvme2n1: ios=17714/0, merge=0/0, ticks=1234386/0, in_queue=1234386, util=97.74% 00:21:41.874 nvme3n1: ios=12403/0, merge=0/0, ticks=1233255/0, in_queue=1233255, util=97.83% 00:21:41.874 nvme4n1: ios=16242/0, merge=0/0, ticks=1233540/0, in_queue=1233540, util=98.24% 00:21:41.874 nvme5n1: ios=15544/0, merge=0/0, ticks=1229703/0, in_queue=1229703, util=98.36% 00:21:41.874 nvme6n1: ios=15298/0, merge=0/0, ticks=1233964/0, in_queue=1233964, util=98.47% 00:21:41.874 nvme7n1: ios=12688/0, merge=0/0, ticks=1236803/0, in_queue=1236803, util=98.92% 00:21:41.874 nvme8n1: ios=11918/0, merge=0/0, ticks=1230958/0, in_queue=1230958, util=99.07% 00:21:41.874 nvme9n1: ios=13224/0, merge=0/0, ticks=1228408/0, in_queue=1228408, util=99.18% 00:21:41.874 20:22:17 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:41.874 [global] 00:21:41.874 thread=1 00:21:41.874 invalidate=1 00:21:41.874 rw=randwrite 00:21:41.874 time_based=1 00:21:41.874 runtime=10 00:21:41.874 ioengine=libaio 00:21:41.874 direct=1 00:21:41.874 bs=262144 00:21:41.874 iodepth=64 00:21:41.874 norandommap=1 00:21:41.874 numjobs=1 00:21:41.874 00:21:41.874 [job0] 00:21:41.874 filename=/dev/nvme0n1 00:21:41.874 [job1] 00:21:41.874 filename=/dev/nvme10n1 00:21:41.874 [job2] 00:21:41.874 filename=/dev/nvme11n1 00:21:41.874 [job3] 00:21:41.874 filename=/dev/nvme2n1 00:21:41.874 [job4] 00:21:41.874 filename=/dev/nvme3n1 00:21:41.874 [job5] 00:21:41.874 filename=/dev/nvme4n1 00:21:41.874 [job6] 00:21:41.874 filename=/dev/nvme5n1 00:21:41.874 
[job7] 00:21:41.874 filename=/dev/nvme6n1 00:21:41.874 [job8] 00:21:41.874 filename=/dev/nvme7n1 00:21:41.874 [job9] 00:21:41.874 filename=/dev/nvme8n1 00:21:41.874 [job10] 00:21:41.874 filename=/dev/nvme9n1 00:21:41.874 Could not set queue depth (nvme0n1) 00:21:41.874 Could not set queue depth (nvme10n1) 00:21:41.874 Could not set queue depth (nvme11n1) 00:21:41.874 Could not set queue depth (nvme2n1) 00:21:41.874 Could not set queue depth (nvme3n1) 00:21:41.874 Could not set queue depth (nvme4n1) 00:21:41.874 Could not set queue depth (nvme5n1) 00:21:41.874 Could not set queue depth (nvme6n1) 00:21:41.874 Could not set queue depth (nvme7n1) 00:21:41.874 Could not set queue depth (nvme8n1) 00:21:41.874 Could not set queue depth (nvme9n1) 00:21:41.874 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:41.874 fio-3.35 00:21:41.874 Starting 11 threads 00:21:51.856 00:21:51.856 job0: (groupid=0, jobs=1): err= 0: pid=1841738: Wed Feb 14 20:22:28 2024 00:21:51.856 write: IOPS=359, BW=89.9MiB/s (94.3MB/s)(908MiB/10098msec); 0 zone resets 00:21:51.856 slat (usec): min=24, max=225130, avg=2348.42, stdev=7884.02 00:21:51.856 clat (msec): min=9, max=452, avg=175.03, stdev=90.12 00:21:51.856 lat (msec): min=9, max=452, avg=177.38, stdev=91.65 00:21:51.856 clat percentiles (msec): 00:21:51.856 | 1.00th=[ 25], 5.00th=[ 57], 10.00th=[ 75], 20.00th=[ 99], 00:21:51.856 | 30.00th=[ 114], 40.00th=[ 126], 50.00th=[ 148], 60.00th=[ 184], 00:21:51.856 | 70.00th=[ 239], 80.00th=[ 264], 90.00th=[ 300], 95.00th=[ 338], 00:21:51.856 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 451], 00:21:51.856 | 99.99th=[ 451] 00:21:51.856 bw ( KiB/s): min=40960, max=196096, per=7.05%, avg=91355.40, stdev=41544.08, samples=20 00:21:51.856 iops : min= 160, max= 766, avg=356.85, stdev=162.27, samples=20 00:21:51.856 lat (msec) : 10=0.03%, 20=0.50%, 50=3.50%, 100=16.97%, 250=52.74% 00:21:51.856 lat (msec) : 500=26.27% 00:21:51.856 cpu : usr=1.19%, sys=1.30%, ctx=1588, majf=0, minf=1 00:21:51.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:21:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.856 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.856 issued rwts: total=0,3631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.856 job1: (groupid=0, jobs=1): err= 0: pid=1841758: Wed Feb 14 20:22:28 2024 00:21:51.856 write: IOPS=533, BW=133MiB/s (140MB/s)(1352MiB/10128msec); 0 zone resets 00:21:51.856 slat (usec): min=20, max=73872, avg=1412.89, stdev=3954.20 00:21:51.856 clat (usec): min=1749, max=315601, avg=118418.78, stdev=62283.73 00:21:51.856 lat (usec): min=1797, max=315666, avg=119831.67, stdev=63118.70 00:21:51.856 clat percentiles (msec): 00:21:51.856 | 1.00th=[ 13], 5.00th=[ 22], 10.00th=[ 31], 20.00th=[ 57], 00:21:51.856 | 30.00th=[ 91], 40.00th=[ 109], 50.00th=[ 124], 60.00th=[ 134], 00:21:51.856 | 70.00th=[ 142], 80.00th=[ 155], 90.00th=[ 194], 95.00th=[ 251], 00:21:51.856 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 309], 99.95th=[ 313], 00:21:51.856 | 99.99th=[ 317] 00:21:51.856 bw ( KiB/s): min=67584, max=334848, per=10.56%, avg=136780.80, stdev=55490.84, samples=20 00:21:51.856 iops : min= 264, max= 1308, avg=534.30, stdev=216.76, samples=20 00:21:51.856 lat (msec) : 2=0.02%, 4=0.11%, 10=0.41%, 20=3.48%, 50=14.22% 00:21:51.856 lat (msec) : 100=15.35%, 250=61.55%, 500=4.86% 00:21:51.856 cpu : usr=1.10%, sys=1.58%, ctx=2947, majf=0, minf=1 00:21:51.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.856 issued rwts: total=0,5407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.856 job2: (groupid=0, jobs=1): err= 0: pid=1841770: Wed Feb 14 20:22:28 2024 00:21:51.856 write: IOPS=510, BW=128MiB/s (134MB/s)(1290MiB/10113msec); 0 zone resets 00:21:51.856 slat (usec): min=21, max=159133, avg=1551.69, stdev=4940.52 00:21:51.856 clat (msec): min=8, max=375, avg=123.89, stdev=75.90 00:21:51.856 lat (msec): min=8, max=418, avg=125.44, stdev=76.99 00:21:51.856 clat percentiles (msec): 00:21:51.856 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 45], 20.00th=[ 57], 00:21:51.856 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 103], 60.00th=[ 127], 00:21:51.856 | 70.00th=[ 153], 80.00th=[ 205], 90.00th=[ 243], 95.00th=[ 266], 00:21:51.856 | 99.00th=[ 313], 99.50th=[ 334], 99.90th=[ 372], 99.95th=[ 376], 00:21:51.856 | 99.99th=[ 376] 00:21:51.856 bw ( KiB/s): min=57344, max=268800, per=10.07%, avg=130432.00, stdev=60327.53, samples=20 00:21:51.856 iops : min= 224, max= 1050, avg=509.50, stdev=235.65, samples=20 00:21:51.856 lat (msec) : 10=0.02%, 20=0.70%, 50=12.62%, 100=35.65%, 250=44.22% 00:21:51.856 lat (msec) : 500=6.79% 00:21:51.856 cpu : usr=1.04%, sys=1.86%, ctx=2488, majf=0, minf=1 00:21:51.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.856 issued rwts: total=0,5158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.856 job3: (groupid=0, jobs=1): err= 0: pid=1841782: Wed Feb 14 20:22:28 2024 00:21:51.856 write: IOPS=393, BW=98.3MiB/s (103MB/s)(1001MiB/10184msec); 0 zone resets 00:21:51.856 slat (usec): min=20, max=122910, avg=2090.93, stdev=6175.42 00:21:51.856 clat (msec): min=2, 
max=663, avg=160.67, stdev=94.10 00:21:51.856 lat (msec): min=2, max=663, avg=162.76, stdev=95.05 00:21:51.856 clat percentiles (msec): 00:21:51.856 | 1.00th=[ 22], 5.00th=[ 43], 10.00th=[ 78], 20.00th=[ 106], 00:21:51.856 | 30.00th=[ 118], 40.00th=[ 129], 50.00th=[ 138], 60.00th=[ 150], 00:21:51.856 | 70.00th=[ 171], 80.00th=[ 207], 90.00th=[ 271], 95.00th=[ 317], 00:21:51.856 | 99.00th=[ 575], 99.50th=[ 634], 99.90th=[ 651], 99.95th=[ 659], 00:21:51.856 | 99.99th=[ 667] 00:21:51.856 bw ( KiB/s): min=20480, max=173056, per=7.78%, avg=100838.40, stdev=37960.57, samples=20 00:21:51.856 iops : min= 80, max= 676, avg=393.90, stdev=148.28, samples=20 00:21:51.856 lat (msec) : 4=0.02%, 10=0.15%, 20=0.70%, 50=5.25%, 100=10.62% 00:21:51.856 lat (msec) : 250=70.27%, 500=11.37%, 750=1.62% 00:21:51.856 cpu : usr=0.85%, sys=1.11%, ctx=1776, majf=0, minf=1 00:21:51.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.856 issued rwts: total=0,4003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.856 job4: (groupid=0, jobs=1): err= 0: pid=1841786: Wed Feb 14 20:22:28 2024 00:21:51.856 write: IOPS=434, BW=109MiB/s (114MB/s)(1101MiB/10136msec); 0 zone resets 00:21:51.856 slat (usec): min=22, max=110635, avg=2161.10, stdev=5463.68 00:21:51.856 clat (msec): min=26, max=312, avg=144.91, stdev=63.41 00:21:51.856 lat (msec): min=26, max=312, avg=147.07, stdev=64.12 00:21:51.856 clat percentiles (msec): 00:21:51.856 | 1.00th=[ 57], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 78], 00:21:51.856 | 30.00th=[ 104], 40.00th=[ 123], 50.00th=[ 136], 60.00th=[ 155], 00:21:51.856 | 70.00th=[ 182], 80.00th=[ 209], 90.00th=[ 241], 95.00th=[ 262], 00:21:51.856 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 305], 99.95th=[ 313], 00:21:51.856 | 99.99th=[ 313] 00:21:51.856 bw ( KiB/s): min=63488, max=232448, per=8.57%, avg=111061.75, stdev=45287.31, samples=20 00:21:51.856 iops : min= 248, max= 908, avg=433.80, stdev=176.92, samples=20 00:21:51.856 lat (msec) : 50=0.80%, 100=28.21%, 250=63.31%, 500=7.68% 00:21:51.856 cpu : usr=1.15%, sys=1.23%, ctx=1273, majf=0, minf=1 00:21:51.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.856 issued rwts: total=0,4402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.856 job5: (groupid=0, jobs=1): err= 0: pid=1841810: Wed Feb 14 20:22:28 2024 00:21:51.857 write: IOPS=424, BW=106MiB/s (111MB/s)(1074MiB/10118msec); 0 zone resets 00:21:51.857 slat (usec): min=27, max=176134, avg=1959.67, stdev=5362.95 00:21:51.857 clat (msec): min=6, max=337, avg=148.38, stdev=63.30 00:21:51.857 lat (msec): min=6, max=338, avg=150.34, stdev=64.11 00:21:51.857 clat percentiles (msec): 00:21:51.857 | 1.00th=[ 21], 5.00th=[ 56], 10.00th=[ 77], 20.00th=[ 95], 00:21:51.857 | 30.00th=[ 109], 40.00th=[ 122], 50.00th=[ 133], 60.00th=[ 165], 00:21:51.857 | 70.00th=[ 186], 80.00th=[ 207], 90.00th=[ 241], 95.00th=[ 255], 00:21:51.857 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 338], 99.95th=[ 338], 00:21:51.857 | 99.99th=[ 338] 00:21:51.857 bw ( KiB/s): min=53248, max=173402, per=8.37%, 
avg=108407.70, stdev=36343.71, samples=20 00:21:51.857 iops : min= 208, max= 677, avg=423.45, stdev=141.93, samples=20 00:21:51.857 lat (msec) : 10=0.14%, 20=0.79%, 50=3.26%, 100=19.22%, 250=69.86% 00:21:51.857 lat (msec) : 500=6.73% 00:21:51.857 cpu : usr=1.26%, sys=1.39%, ctx=1727, majf=0, minf=1 00:21:51.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:21:51.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.857 issued rwts: total=0,4297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.857 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.857 job6: (groupid=0, jobs=1): err= 0: pid=1841829: Wed Feb 14 20:22:28 2024 00:21:51.857 write: IOPS=512, BW=128MiB/s (134MB/s)(1297MiB/10116msec); 0 zone resets 00:21:51.857 slat (usec): min=19, max=150353, avg=1346.43, stdev=4198.91 00:21:51.857 clat (msec): min=3, max=312, avg=123.24, stdev=62.88 00:21:51.857 lat (msec): min=3, max=312, avg=124.59, stdev=63.65 00:21:51.857 clat percentiles (msec): 00:21:51.857 | 1.00th=[ 12], 5.00th=[ 23], 10.00th=[ 43], 20.00th=[ 72], 00:21:51.857 | 30.00th=[ 91], 40.00th=[ 110], 50.00th=[ 122], 60.00th=[ 130], 00:21:51.857 | 70.00th=[ 140], 80.00th=[ 165], 90.00th=[ 226], 95.00th=[ 253], 00:21:51.857 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 00:21:51.857 | 99.99th=[ 313] 00:21:51.857 bw ( KiB/s): min=70656, max=224256, per=10.12%, avg=131159.50, stdev=40597.86, samples=20 00:21:51.857 iops : min= 276, max= 876, avg=512.30, stdev=158.61, samples=20 00:21:51.857 lat (msec) : 4=0.02%, 10=0.67%, 20=3.20%, 50=7.96%, 100=22.27% 00:21:51.857 lat (msec) : 250=60.53%, 500=5.34% 00:21:51.857 cpu : usr=1.05%, sys=1.51%, ctx=2944, majf=0, minf=1 00:21:51.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:51.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.857 issued rwts: total=0,5186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.857 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.857 job7: (groupid=0, jobs=1): err= 0: pid=1841838: Wed Feb 14 20:22:28 2024 00:21:51.857 write: IOPS=353, BW=88.3MiB/s (92.6MB/s)(889MiB/10064msec); 0 zone resets 00:21:51.857 slat (usec): min=31, max=60310, avg=2467.12, stdev=6553.41 00:21:51.857 clat (msec): min=20, max=439, avg=178.32, stdev=109.79 00:21:51.857 lat (msec): min=20, max=459, avg=180.79, stdev=111.39 00:21:51.857 clat percentiles (msec): 00:21:51.857 | 1.00th=[ 31], 5.00th=[ 46], 10.00th=[ 56], 20.00th=[ 74], 00:21:51.857 | 30.00th=[ 90], 40.00th=[ 112], 50.00th=[ 142], 60.00th=[ 211], 00:21:51.857 | 70.00th=[ 241], 80.00th=[ 279], 90.00th=[ 355], 95.00th=[ 384], 00:21:51.857 | 99.00th=[ 418], 99.50th=[ 426], 99.90th=[ 439], 99.95th=[ 439], 00:21:51.857 | 99.99th=[ 439] 00:21:51.857 bw ( KiB/s): min=39424, max=250880, per=6.90%, avg=89435.55, stdev=55069.16, samples=20 00:21:51.857 iops : min= 154, max= 980, avg=349.35, stdev=215.11, samples=20 00:21:51.857 lat (msec) : 50=6.86%, 100=28.66%, 250=39.23%, 500=25.25% 00:21:51.857 cpu : usr=0.97%, sys=1.23%, ctx=1514, majf=0, minf=1 00:21:51.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:21:51.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:21:51.857 issued rwts: total=0,3556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.857 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.857 job8: (groupid=0, jobs=1): err= 0: pid=1841870: Wed Feb 14 20:22:28 2024 00:21:51.857 write: IOPS=418, BW=105MiB/s (110MB/s)(1060MiB/10136msec); 0 zone resets 00:21:51.857 slat (usec): min=22, max=261471, avg=1904.80, stdev=6996.16 00:21:51.857 clat (msec): min=3, max=441, avg=150.82, stdev=80.95 00:21:51.857 lat (msec): min=3, max=441, avg=152.73, stdev=81.89 00:21:51.857 clat percentiles (msec): 00:21:51.857 | 1.00th=[ 14], 5.00th=[ 41], 10.00th=[ 64], 20.00th=[ 87], 00:21:51.857 | 30.00th=[ 106], 40.00th=[ 115], 50.00th=[ 127], 60.00th=[ 142], 00:21:51.857 | 70.00th=[ 192], 80.00th=[ 228], 90.00th=[ 268], 95.00th=[ 300], 00:21:51.857 | 99.00th=[ 372], 99.50th=[ 397], 99.90th=[ 439], 99.95th=[ 439], 00:21:51.857 | 99.99th=[ 443] 00:21:51.857 bw ( KiB/s): min=57344, max=185344, per=8.25%, avg=106918.40, stdev=39835.45, samples=20 00:21:51.857 iops : min= 224, max= 724, avg=417.65, stdev=155.61, samples=20 00:21:51.857 lat (msec) : 4=0.05%, 10=0.64%, 20=1.70%, 50=3.77%, 100=19.44% 00:21:51.857 lat (msec) : 250=60.93%, 500=13.47% 00:21:51.857 cpu : usr=0.98%, sys=1.31%, ctx=1909, majf=0, minf=1 00:21:51.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:51.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.857 issued rwts: total=0,4239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.857 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.857 job9: (groupid=0, jobs=1): err= 0: pid=1841882: Wed Feb 14 20:22:28 2024 00:21:51.857 write: IOPS=392, BW=98.0MiB/s (103MB/s)(989MiB/10094msec); 0 zone resets 00:21:51.857 slat (usec): min=23, max=122327, avg=1990.18, stdev=5318.15 00:21:51.857 clat (msec): min=10, max=369, avg=161.22, stdev=80.18 00:21:51.857 lat (msec): min=10, max=369, avg=163.21, stdev=81.22 00:21:51.857 clat percentiles (msec): 00:21:51.857 | 1.00th=[ 18], 5.00th=[ 47], 10.00th=[ 67], 20.00th=[ 97], 00:21:51.857 | 30.00th=[ 103], 40.00th=[ 120], 50.00th=[ 144], 60.00th=[ 174], 00:21:51.857 | 70.00th=[ 209], 80.00th=[ 243], 90.00th=[ 279], 95.00th=[ 305], 00:21:51.857 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 368], 00:21:51.857 | 99.99th=[ 372] 00:21:51.857 bw ( KiB/s): min=50176, max=173056, per=7.70%, avg=99686.40, stdev=37190.08, samples=20 00:21:51.857 iops : min= 196, max= 676, avg=389.40, stdev=145.27, samples=20 00:21:51.857 lat (msec) : 20=1.47%, 50=4.42%, 100=20.39%, 250=55.37%, 500=18.35% 00:21:51.857 cpu : usr=0.77%, sys=1.19%, ctx=1847, majf=0, minf=1 00:21:51.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:51.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.857 issued rwts: total=0,3957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.857 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.857 job10: (groupid=0, jobs=1): err= 0: pid=1841891: Wed Feb 14 20:22:28 2024 00:21:51.857 write: IOPS=762, BW=191MiB/s (200MB/s)(1925MiB/10100msec); 0 zone resets 00:21:51.857 slat (usec): min=42, max=100943, avg=1230.47, stdev=2719.41 00:21:51.857 clat (msec): min=7, max=249, avg=82.55, stdev=32.35 00:21:51.857 lat (msec): min=10, max=249, avg=83.78, stdev=32.68 00:21:51.857 
clat percentiles (msec): 00:21:51.857 | 1.00th=[ 49], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 60], 00:21:51.857 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 75], 00:21:51.857 | 70.00th=[ 94], 80.00th=[ 106], 90.00th=[ 126], 95.00th=[ 144], 00:21:51.857 | 99.00th=[ 207], 99.50th=[ 220], 99.90th=[ 245], 99.95th=[ 247], 00:21:51.857 | 99.99th=[ 251] 00:21:51.857 bw ( KiB/s): min=85504, max=279040, per=15.09%, avg=195456.00, stdev=61188.17, samples=20 00:21:51.857 iops : min= 334, max= 1090, avg=763.50, stdev=239.02, samples=20 00:21:51.857 lat (msec) : 10=0.01%, 20=0.08%, 50=1.01%, 100=74.43%, 250=24.46% 00:21:51.857 cpu : usr=2.78%, sys=2.17%, ctx=2239, majf=0, minf=1 00:21:51.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:51.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.857 issued rwts: total=0,7698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.857 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.857 00:21:51.857 Run status group 0 (all jobs): 00:21:51.857 WRITE: bw=1265MiB/s (1327MB/s), 88.3MiB/s-191MiB/s (92.6MB/s-200MB/s), io=12.6GiB (13.5GB), run=10064-10184msec 00:21:51.857 00:21:51.857 Disk stats (read/write): 00:21:51.857 nvme0n1: ios=51/7062, merge=0/0, ticks=1552/1206646, in_queue=1208198, util=98.87% 00:21:51.857 nvme10n1: ios=49/10614, merge=0/0, ticks=114/1209909, in_queue=1210023, util=97.69% 00:21:51.857 nvme11n1: ios=50/10096, merge=0/0, ticks=152/1213134, in_queue=1213286, util=97.85% 00:21:51.857 nvme2n1: ios=49/7958, merge=0/0, ticks=117/1237200, in_queue=1237317, util=98.32% 00:21:51.857 nvme3n1: ios=44/8589, merge=0/0, ticks=1755/1180216, in_queue=1181971, util=99.94% 00:21:51.857 nvme4n1: ios=49/8383, merge=0/0, ticks=1982/1180766, in_queue=1182748, util=99.86% 00:21:51.857 nvme5n1: ios=45/10174, merge=0/0, ticks=870/1217683, in_queue=1218553, util=99.95% 00:21:51.857 nvme6n1: ios=30/6802, merge=0/0, ticks=1560/1213872, in_queue=1215432, util=100.00% 00:21:51.857 nvme7n1: ios=51/8316, merge=0/0, ticks=3473/1202598, in_queue=1206071, util=99.92% 00:21:51.857 nvme8n1: ios=0/7690, merge=0/0, ticks=0/1215704, in_queue=1215704, util=98.87% 00:21:51.857 nvme9n1: ios=53/15186, merge=0/0, ticks=1083/1203738, in_queue=1204821, util=100.00% 00:21:51.857 20:22:28 -- target/multiconnection.sh@36 -- # sync 00:21:51.857 20:22:28 -- target/multiconnection.sh@37 -- # seq 1 11 00:21:51.857 20:22:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:51.857 20:22:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:51.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:51.857 20:22:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:51.857 20:22:28 -- common/autotest_common.sh@1196 -- # local i=0 00:21:51.857 20:22:28 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:51.857 20:22:28 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK1 00:21:51.857 20:22:28 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:51.857 20:22:28 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK1 00:21:51.857 20:22:28 -- common/autotest_common.sh@1208 -- # return 0 00:21:51.857 20:22:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.858 20:22:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.858 20:22:28 -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.858 20:22:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.858 20:22:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:51.858 20:22:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:51.858 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:51.858 20:22:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:51.858 20:22:29 -- common/autotest_common.sh@1196 -- # local i=0 00:21:51.858 20:22:29 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:51.858 20:22:29 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK2 00:21:51.858 20:22:29 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:51.858 20:22:29 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK2 00:21:51.858 20:22:29 -- common/autotest_common.sh@1208 -- # return 0 00:21:51.858 20:22:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:51.858 20:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.858 20:22:29 -- common/autotest_common.sh@10 -- # set +x 00:21:51.858 20:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.858 20:22:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:51.858 20:22:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:52.117 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:52.117 20:22:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:52.117 20:22:29 -- common/autotest_common.sh@1196 -- # local i=0 00:21:52.117 20:22:29 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:52.117 20:22:29 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK3 00:21:52.117 20:22:29 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:52.117 20:22:29 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK3 00:21:52.376 20:22:29 -- common/autotest_common.sh@1208 -- # return 0 00:21:52.376 20:22:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:52.376 20:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.376 20:22:29 -- common/autotest_common.sh@10 -- # set +x 00:21:52.376 20:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.376 20:22:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.376 20:22:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:52.635 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:52.635 20:22:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:52.635 20:22:29 -- common/autotest_common.sh@1196 -- # local i=0 00:21:52.635 20:22:29 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:52.635 20:22:29 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK4 00:21:52.635 20:22:29 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:52.635 20:22:29 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK4 00:21:52.635 20:22:29 -- common/autotest_common.sh@1208 -- # return 0 00:21:52.635 20:22:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:52.635 20:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.635 20:22:29 -- common/autotest_common.sh@10 -- # set +x 00:21:52.635 20:22:29 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:21:52.635 20:22:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.635 20:22:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:52.894 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:52.894 20:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:52.894 20:22:30 -- common/autotest_common.sh@1196 -- # local i=0 00:21:52.894 20:22:30 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:52.894 20:22:30 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK5 00:21:52.894 20:22:30 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:52.894 20:22:30 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK5 00:21:52.894 20:22:30 -- common/autotest_common.sh@1208 -- # return 0 00:21:52.894 20:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:52.894 20:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.894 20:22:30 -- common/autotest_common.sh@10 -- # set +x 00:21:52.894 20:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.894 20:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.894 20:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:53.154 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:53.154 20:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:53.154 20:22:30 -- common/autotest_common.sh@1196 -- # local i=0 00:21:53.154 20:22:30 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:53.154 20:22:30 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK6 00:21:53.154 20:22:30 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:53.154 20:22:30 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK6 00:21:53.154 20:22:30 -- common/autotest_common.sh@1208 -- # return 0 00:21:53.154 20:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:53.154 20:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.154 20:22:30 -- common/autotest_common.sh@10 -- # set +x 00:21:53.154 20:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.154 20:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.154 20:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:53.413 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:53.413 20:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:53.413 20:22:30 -- common/autotest_common.sh@1196 -- # local i=0 00:21:53.413 20:22:30 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:53.413 20:22:30 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK7 00:21:53.413 20:22:30 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:53.413 20:22:30 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK7 00:21:53.413 20:22:30 -- common/autotest_common.sh@1208 -- # return 0 00:21:53.413 20:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:53.413 20:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.413 20:22:30 -- common/autotest_common.sh@10 -- # set +x 00:21:53.413 20:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.413 20:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:21:53.413 20:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:53.673 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:53.673 20:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:53.673 20:22:30 -- common/autotest_common.sh@1196 -- # local i=0 00:21:53.673 20:22:30 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:53.673 20:22:30 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK8 00:21:53.673 20:22:30 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK8 00:21:53.673 20:22:30 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:53.673 20:22:30 -- common/autotest_common.sh@1208 -- # return 0 00:21:53.673 20:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:53.673 20:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.673 20:22:30 -- common/autotest_common.sh@10 -- # set +x 00:21:53.673 20:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.673 20:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.673 20:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:53.673 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:53.673 20:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:53.673 20:22:30 -- common/autotest_common.sh@1196 -- # local i=0 00:21:53.673 20:22:30 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:53.673 20:22:30 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK9 00:21:53.673 20:22:30 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK9 00:21:53.673 20:22:30 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:53.673 20:22:31 -- common/autotest_common.sh@1208 -- # return 0 00:21:53.673 20:22:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:53.673 20:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.673 20:22:31 -- common/autotest_common.sh@10 -- # set +x 00:21:53.673 20:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.673 20:22:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.673 20:22:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:53.673 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:53.673 20:22:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:53.673 20:22:31 -- common/autotest_common.sh@1196 -- # local i=0 00:21:53.673 20:22:31 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK10 00:21:53.673 20:22:31 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:53.932 20:22:31 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:53.932 20:22:31 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK10 00:21:53.932 20:22:31 -- common/autotest_common.sh@1208 -- # return 0 00:21:53.932 20:22:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:53.932 20:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.932 20:22:31 -- common/autotest_common.sh@10 -- # set +x 00:21:53.932 20:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.932 20:22:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.932 20:22:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode11 00:21:53.932 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:53.932 20:22:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:53.932 20:22:31 -- common/autotest_common.sh@1196 -- # local i=0 00:21:53.932 20:22:31 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:53.932 20:22:31 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK11 00:21:53.932 20:22:31 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK11 00:21:53.932 20:22:31 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:53.932 20:22:31 -- common/autotest_common.sh@1208 -- # return 0 00:21:53.932 20:22:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:53.932 20:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.932 20:22:31 -- common/autotest_common.sh@10 -- # set +x 00:21:53.932 20:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.932 20:22:31 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:53.932 20:22:31 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:53.933 20:22:31 -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:53.933 20:22:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:53.933 20:22:31 -- nvmf/common.sh@116 -- # sync 00:21:53.933 20:22:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:53.933 20:22:31 -- nvmf/common.sh@119 -- # set +e 00:21:53.933 20:22:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:53.933 20:22:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:53.933 rmmod nvme_tcp 00:21:53.933 rmmod nvme_fabrics 00:21:53.933 rmmod nvme_keyring 00:21:53.933 20:22:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:53.933 20:22:31 -- nvmf/common.sh@123 -- # set -e 00:21:53.933 20:22:31 -- nvmf/common.sh@124 -- # return 0 00:21:53.933 20:22:31 -- nvmf/common.sh@477 -- # '[' -n 1833430 ']' 00:21:53.933 20:22:31 -- nvmf/common.sh@478 -- # killprocess 1833430 00:21:53.933 20:22:31 -- common/autotest_common.sh@924 -- # '[' -z 1833430 ']' 00:21:53.933 20:22:31 -- common/autotest_common.sh@928 -- # kill -0 1833430 00:21:53.933 20:22:31 -- common/autotest_common.sh@929 -- # uname 00:21:53.933 20:22:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:54.192 20:22:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1833430 00:21:54.192 20:22:31 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:54.192 20:22:31 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:54.192 20:22:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1833430' 00:21:54.192 killing process with pid 1833430 00:21:54.192 20:22:31 -- common/autotest_common.sh@943 -- # kill 1833430 00:21:54.192 20:22:31 -- common/autotest_common.sh@948 -- # wait 1833430 00:21:54.452 20:22:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:54.452 20:22:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:54.452 20:22:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:54.452 20:22:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.452 20:22:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:54.452 20:22:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.452 20:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.452 20:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.993 20:22:33 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:56.993 00:21:56.993 real 1m11.184s 00:21:56.993 user 4m14.568s 00:21:56.993 sys 0m22.325s 00:21:56.993 20:22:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:56.993 20:22:33 -- common/autotest_common.sh@10 -- # set +x 00:21:56.993 ************************************ 00:21:56.993 END TEST nvmf_multiconnection 00:21:56.993 ************************************ 00:21:56.993 20:22:33 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:56.993 20:22:33 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:21:56.993 20:22:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:56.993 20:22:33 -- common/autotest_common.sh@10 -- # set +x 00:21:56.993 ************************************ 00:21:56.993 START TEST nvmf_initiator_timeout 00:21:56.993 ************************************ 00:21:56.993 20:22:33 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:56.993 * Looking for test storage... 00:21:56.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.993 20:22:33 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.993 20:22:33 -- nvmf/common.sh@7 -- # uname -s 00:21:56.993 20:22:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.993 20:22:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.993 20:22:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.993 20:22:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.993 20:22:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.993 20:22:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.993 20:22:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.993 20:22:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.993 20:22:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.993 20:22:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.993 20:22:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:56.993 20:22:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:56.993 20:22:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.993 20:22:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.993 20:22:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.993 20:22:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.993 20:22:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.993 20:22:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.993 20:22:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.993 20:22:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.993 20:22:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.993 20:22:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.993 20:22:34 -- paths/export.sh@5 -- # export PATH 00:21:56.993 20:22:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.993 20:22:34 -- nvmf/common.sh@46 -- # : 0 00:21:56.993 20:22:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:56.993 20:22:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:56.993 20:22:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:56.993 20:22:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.993 20:22:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.993 20:22:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:56.993 20:22:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:56.993 20:22:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:56.993 20:22:34 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.993 20:22:34 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.993 20:22:34 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:56.993 20:22:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:56.993 20:22:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.993 20:22:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:56.993 20:22:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:56.993 20:22:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:56.993 20:22:34 -- nvmf/common.sh@616 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.993 20:22:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.993 20:22:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.993 20:22:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:56.993 20:22:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:56.993 20:22:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:56.993 20:22:34 -- common/autotest_common.sh@10 -- # set +x 00:22:03.597 20:22:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:03.597 20:22:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:03.597 20:22:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:03.597 20:22:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:03.597 20:22:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:03.597 20:22:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:03.597 20:22:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:03.597 20:22:40 -- nvmf/common.sh@294 -- # net_devs=() 00:22:03.597 20:22:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:03.597 20:22:40 -- nvmf/common.sh@295 -- # e810=() 00:22:03.597 20:22:40 -- nvmf/common.sh@295 -- # local -ga e810 00:22:03.597 20:22:40 -- nvmf/common.sh@296 -- # x722=() 00:22:03.597 20:22:40 -- nvmf/common.sh@296 -- # local -ga x722 00:22:03.597 20:22:40 -- nvmf/common.sh@297 -- # mlx=() 00:22:03.597 20:22:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:03.597 20:22:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.597 20:22:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:03.597 20:22:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:03.597 20:22:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:03.597 20:22:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:03.597 20:22:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:03.597 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:03.597 20:22:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@339 -- # for pci in 
"${pci_devs[@]}" 00:22:03.597 20:22:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:03.597 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:03.597 20:22:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:03.597 20:22:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:03.597 20:22:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.597 20:22:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:03.597 20:22:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.597 20:22:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:03.597 Found net devices under 0000:af:00.0: cvl_0_0 00:22:03.597 20:22:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.597 20:22:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:03.597 20:22:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.597 20:22:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:03.597 20:22:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.597 20:22:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:03.597 Found net devices under 0000:af:00.1: cvl_0_1 00:22:03.597 20:22:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.597 20:22:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:03.597 20:22:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:03.597 20:22:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:03.597 20:22:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:03.597 20:22:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.597 20:22:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.597 20:22:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.597 20:22:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:03.597 20:22:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.597 20:22:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.597 20:22:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:03.597 20:22:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.597 20:22:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.597 20:22:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:03.597 20:22:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:03.597 20:22:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.597 20:22:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.597 20:22:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.597 20:22:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.597 20:22:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:03.597 20:22:40 -- nvmf/common.sh@259 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.597 20:22:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.597 20:22:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.597 20:22:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:03.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:22:03.597 00:22:03.597 --- 10.0.0.2 ping statistics --- 00:22:03.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.597 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:03.598 20:22:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:03.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:22:03.598 00:22:03.598 --- 10.0.0.1 ping statistics --- 00:22:03.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.598 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:22:03.598 20:22:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.598 20:22:40 -- nvmf/common.sh@410 -- # return 0 00:22:03.598 20:22:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:03.598 20:22:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.598 20:22:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:03.598 20:22:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:03.598 20:22:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.598 20:22:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:03.598 20:22:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:03.598 20:22:40 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:03.598 20:22:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:03.598 20:22:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:03.598 20:22:40 -- common/autotest_common.sh@10 -- # set +x 00:22:03.598 20:22:40 -- nvmf/common.sh@469 -- # nvmfpid=1847629 00:22:03.598 20:22:40 -- nvmf/common.sh@470 -- # waitforlisten 1847629 00:22:03.598 20:22:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:03.598 20:22:40 -- common/autotest_common.sh@817 -- # '[' -z 1847629 ']' 00:22:03.598 20:22:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.598 20:22:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:03.598 20:22:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.598 20:22:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:03.598 20:22:40 -- common/autotest_common.sh@10 -- # set +x 00:22:03.598 [2024-02-14 20:22:40.366529] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
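The nvmf_tcp_init trace above captures the physical-loopback topology this test runs on: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 to act as the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic actually crosses the physical link. A minimal sketch of that bring-up, using only commands that appear in the trace (interface, namespace, and address values are the logged ones; the condensation itself is illustrative):

# Reset addressing, then split the two ports across namespaces.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
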
00:22:03.598 [2024-02-14 20:22:40.366569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.598 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.598 [2024-02-14 20:22:40.423802] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.598 [2024-02-14 20:22:40.499277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:03.598 [2024-02-14 20:22:40.499387] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.598 [2024-02-14 20:22:40.499394] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.598 [2024-02-14 20:22:40.499400] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.598 [2024-02-14 20:22:40.499457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.598 [2024-02-14 20:22:40.499475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.598 [2024-02-14 20:22:40.499576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.598 [2024-02-14 20:22:40.499577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.857 20:22:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:03.857 20:22:41 -- common/autotest_common.sh@850 -- # return 0 00:22:03.857 20:22:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:03.857 20:22:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:03.857 20:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:03.857 20:22:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.857 20:22:41 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:03.857 20:22:41 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:03.857 20:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.857 20:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:03.857 Malloc0 00:22:03.857 20:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.857 20:22:41 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:03.857 20:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.857 20:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:03.857 Delay0 00:22:03.857 20:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.857 20:22:41 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.857 20:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.857 20:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:03.857 [2024-02-14 20:22:41.236928] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.857 20:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.857 20:22:41 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:03.857 20:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.857 20:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:03.857 20:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
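Condensed, the rpc_cmd sequence just traced builds the initiator_timeout fixture: a 64 MiB Malloc bdev is wrapped in a delay bdev (Delay0) so the test can later dial per-I/O latency up and down at runtime, and a TCP transport plus subsystem are created around it; the namespace and listener are attached in the records that follow. A sketch of the same calls as direct rpc.py invocations (using scripts/rpc.py as the front-end is an assumption about how rpc_cmd resolves; all arguments are copied from the trace, with -r/-t/-w/-n being average/p99 read and write latencies in microseconds):

# Fixture bring-up as logged (rpc.py front-end assumed).
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512-byte blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
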
00:22:03.857 20:22:41 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:22:03.857 20:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:03.857 20:22:41 -- common/autotest_common.sh@10 -- # set +x
00:22:03.857 20:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:03.857 20:22:41 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:03.857 20:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:03.857 20:22:41 -- common/autotest_common.sh@10 -- # set +x
00:22:03.858 [2024-02-14 20:22:41.261815] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:03.858 20:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:03.858 20:22:41 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:22:05.235 20:22:42 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:22:05.235 20:22:42 -- common/autotest_common.sh@1175 -- # local i=0
00:22:05.235 20:22:42 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0
00:22:05.235 20:22:42 -- common/autotest_common.sh@1177 -- # [[ -n '' ]]
00:22:05.235 20:22:42 -- common/autotest_common.sh@1182 -- # sleep 2
00:22:07.142 20:22:44 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 ))
00:22:07.142 20:22:44 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL
00:22:07.142 20:22:44 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME
00:22:07.142 20:22:44 -- common/autotest_common.sh@1184 -- # nvme_devices=1
00:22:07.142 20:22:44 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter ))
00:22:07.142 20:22:44 -- common/autotest_common.sh@1185 -- # return 0
00:22:07.142 20:22:44 -- target/initiator_timeout.sh@35 -- # fio_pid=1848350
00:22:07.142 20:22:44 -- target/initiator_timeout.sh@37 -- # sleep 3
00:22:07.142 20:22:44 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:22:07.142 [global]
00:22:07.142 thread=1
00:22:07.142 invalidate=1
00:22:07.142 rw=write
00:22:07.142 time_based=1
00:22:07.142 runtime=60
00:22:07.142 ioengine=libaio
00:22:07.142 direct=1
00:22:07.142 bs=4096
00:22:07.142 iodepth=1
00:22:07.142 norandommap=0
00:22:07.142 numjobs=1
00:22:07.142
00:22:07.142 verify_dump=1
00:22:07.142 verify_backlog=512
00:22:07.142 verify_state_save=0
00:22:07.142 do_verify=1
00:22:07.142 verify=crc32c-intel
00:22:07.142 [job0]
00:22:07.142 filename=/dev/nvme0n1
00:22:07.142 Could not set queue depth (nvme0n1)
00:22:07.401 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:22:07.401 fio-3.35
00:22:07.401 Starting 1 thread
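With fio now writing to /dev/nvme0n1 (iodepth=1, 4 KiB writes with crc32c verification per the job file above), the records that follow drive the actual timeout scenario: Delay0's latencies are raised to roughly 31 seconds, held for three seconds so in-flight I/O trips the initiator's timeout handling, then restored to 30 microseconds so the 60-second run can finish. The stalled I/Os surface as the extreme tail of the read-latency table further down (max ≈ 41.5 s; the >=2000 ms bucket catches ~0.02% of I/Os), and "fio successful as expected" is the pass criterion. The timeline, sketched with the same assumed rpc.py front-end and the microsecond values taken verbatim from the trace:

# Stall window: push latency far past the initiator timeout...
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000   # value as logged
sleep 3
# ...then restore fast completions for the remainder of the run.
for metric in avg_read avg_write p99_read p99_write; do
    scripts/rpc.py bdev_delay_update_latency Delay0 "$metric" 30
done
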
00:22:10.690 20:22:47 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:22:10.690 20:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.690 20:22:47 -- common/autotest_common.sh@10 -- # set +x
00:22:10.690 true
00:22:10.690 20:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.690 20:22:47 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:22:10.690 20:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.690 20:22:47 -- common/autotest_common.sh@10 -- # set +x
00:22:10.690 true
00:22:10.690 20:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.690 20:22:47 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:22:10.690 20:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.690 20:22:47 -- common/autotest_common.sh@10 -- # set +x
00:22:10.690 true
00:22:10.690 20:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.690 20:22:47 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:22:10.690 20:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.690 20:22:47 -- common/autotest_common.sh@10 -- # set +x
00:22:10.690 true
00:22:10.690 20:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.690 20:22:47 -- target/initiator_timeout.sh@45 -- # sleep 3
00:22:13.222 20:22:50 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:22:13.222 20:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:13.222 20:22:50 -- common/autotest_common.sh@10 -- # set +x
00:22:13.222 true
00:22:13.222 20:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:13.222 20:22:50 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:22:13.223 20:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:13.223 20:22:50 -- common/autotest_common.sh@10 -- # set +x
00:22:13.223 true
00:22:13.223 20:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:13.223 20:22:50 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:22:13.223 20:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:13.223 20:22:50 -- common/autotest_common.sh@10 -- # set +x
00:22:13.223 true
00:22:13.223 20:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:13.223 20:22:50 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:22:13.223 20:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:13.223 20:22:50 -- common/autotest_common.sh@10 -- # set +x
00:22:13.223 true
00:22:13.223 20:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:13.223 20:22:50 -- target/initiator_timeout.sh@53 -- # fio_status=0
00:22:13.223 20:22:50 -- target/initiator_timeout.sh@54 -- # wait 1848350
00:23:09.464
00:23:09.464 job0: (groupid=0, jobs=1): err= 0: pid=1848477: Wed Feb 14 20:23:44 2024
00:23:09.464 read: IOPS=39, BW=159KiB/s (162kB/s)(9516KiB/60025msec)
00:23:09.464 slat (usec): min=7, max=9749, avg=18.40, stdev=268.48
00:23:09.464 clat (usec): min=453, max=41514k, avg=24874.76, stdev=851113.14
00:23:09.464 lat (usec): min=461, max=41514k, avg=24893.16, stdev=851113.24
00:23:09.464 clat percentiles (usec):
00:23:09.464 | 1.00th=[ 486], 5.00th=[ 537], 10.00th=[ 578],
00:23:09.464 | 20.00th=[ 594], 30.00th=[ 611], 40.00th=[ 627],
00:23:09.464 | 50.00th=[ 644], 60.00th=[ 717], 70.00th=[ 848],
00:23:09.464 | 80.00th=[ 947], 90.00th=[ 42206], 95.00th=[ 42206],
00:23:09.464 | 99.00th=[ 42730], 99.50th=[ 42730], 99.90th=[ 43254],
00:23:09.464 | 99.95th=[ 43254], 99.99th=[17112761]
00:23:09.464 write: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60025msec); 0 zone resets
00:23:09.464 slat (nsec): min=6560, max=43362, avg=11436.37, stdev=2511.15
00:23:09.464 clat (usec): min=216, max=733, avg=296.01, stdev=62.62
00:23:09.464  lat (usec): min=224, max=777, avg=307.45, stdev=63.07
00:23:09.464 clat percentiles (usec):
00:23:09.464 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258],
00:23:09.464 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281],
00:23:09.464 | 70.00th=[ 289], 80.00th=[ 326], 90.00th=[ 388], 95.00th=[ 441],
00:23:09.464 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 693], 99.95th=[ 717],
00:23:09.464 | 99.99th=[ 734]
00:23:09.464 bw ( KiB/s): min= 2184, max= 5824, per=100.00%, avg=3413.33, stdev=1398.75, samples=6
00:23:09.464 iops : min= 546, max= 1456, avg=853.33, stdev=349.69, samples=6
00:23:09.464 lat (usec) : 250=4.98%, 500=47.07%, 750=30.17%, 1000=9.80%
00:23:09.464 lat (msec) : 2=0.10%, 50=7.86%, >=2000=0.02%
00:23:09.464 cpu : usr=0.06%, sys=0.15%, ctx=4941, majf=0, minf=2
00:23:09.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:23:09.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:09.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:09.464 issued rwts: total=2379,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:09.464 latency : target=0, window=0, percentile=100.00%, depth=1
00:23:09.464
00:23:09.464 Run status group 0 (all jobs):
00:23:09.464 READ: bw=159KiB/s (162kB/s), 159KiB/s-159KiB/s (162kB/s-162kB/s), io=9516KiB (9744kB), run=60025-60025msec
00:23:09.464 WRITE: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60025-60025msec
00:23:09.464
00:23:09.464 Disk stats (read/write):
00:23:09.464 nvme0n1: ios=2475/2560, merge=0/0, ticks=19018/728, in_queue=19746, util=99.64%
00:23:09.464 20:23:44 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:23:09.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:23:09.464 20:23:45 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:23:09.464 20:23:45 -- common/autotest_common.sh@1196 -- # local i=0
00:23:09.464 20:23:45 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL
00:23:09.464 20:23:45 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME
00:23:09.464 20:23:45 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL
00:23:09.464 20:23:45 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME
00:23:09.464 20:23:45 -- common/autotest_common.sh@1208 -- # return 0
00:23:09.464 20:23:45 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:23:09.464 20:23:45 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:23:09.464 nvmf hotplug test: fio successful as expected
00:23:09.464 20:23:45 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:09.464 20:23:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:09.464 20:23:45 -- common/autotest_common.sh@10 -- # set +x
00:23:09.464 20:23:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:09.464 20:23:45 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:23:09.464 20:23:45 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:23:09.464 20:23:45 -- target/initiator_timeout.sh@73 -- # nvmftestfini
00:23:09.464 20:23:45 -- nvmf/common.sh@476 -- # nvmfcleanup
00:23:09.464 20:23:45 -- nvmf/common.sh@116 -- # sync
00:23:09.464 20:23:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:23:09.464 20:23:45 -- nvmf/common.sh@119 -- # set +e
00:23:09.464 20:23:45 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:23:09.464 20:23:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:09.464 rmmod nvme_tcp 00:23:09.464 rmmod nvme_fabrics 00:23:09.464 rmmod nvme_keyring 00:23:09.464 20:23:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:09.464 20:23:45 -- nvmf/common.sh@123 -- # set -e 00:23:09.464 20:23:45 -- nvmf/common.sh@124 -- # return 0 00:23:09.464 20:23:45 -- nvmf/common.sh@477 -- # '[' -n 1847629 ']' 00:23:09.464 20:23:45 -- nvmf/common.sh@478 -- # killprocess 1847629 00:23:09.464 20:23:45 -- common/autotest_common.sh@924 -- # '[' -z 1847629 ']' 00:23:09.464 20:23:45 -- common/autotest_common.sh@928 -- # kill -0 1847629 00:23:09.464 20:23:45 -- common/autotest_common.sh@929 -- # uname 00:23:09.464 20:23:45 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:09.464 20:23:45 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1847629 00:23:09.464 20:23:45 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:09.464 20:23:45 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:09.464 20:23:45 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1847629' 00:23:09.464 killing process with pid 1847629 00:23:09.464 20:23:45 -- common/autotest_common.sh@943 -- # kill 1847629 00:23:09.464 20:23:45 -- common/autotest_common.sh@948 -- # wait 1847629 00:23:09.465 20:23:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:09.465 20:23:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:09.465 20:23:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:09.465 20:23:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.465 20:23:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:09.465 20:23:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.465 20:23:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.465 20:23:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.403 20:23:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:10.403 00:23:10.403 real 1m13.603s 00:23:10.403 user 4m25.343s 00:23:10.403 sys 0m6.417s 00:23:10.403 20:23:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:10.403 20:23:47 -- common/autotest_common.sh@10 -- # set +x 00:23:10.403 ************************************ 00:23:10.403 END TEST nvmf_initiator_timeout 00:23:10.403 ************************************ 00:23:10.403 20:23:47 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:10.404 20:23:47 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:23:10.404 20:23:47 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:23:10.404 20:23:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:10.404 20:23:47 -- common/autotest_common.sh@10 -- # set +x 00:23:16.993 20:23:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:16.993 20:23:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:16.993 20:23:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:16.993 20:23:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:16.993 20:23:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:16.993 20:23:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:16.993 20:23:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:16.993 20:23:53 -- nvmf/common.sh@294 -- # net_devs=() 00:23:16.993 20:23:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:16.993 20:23:53 -- nvmf/common.sh@295 -- # e810=() 00:23:16.993 20:23:53 -- nvmf/common.sh@295 -- # local -ga e810 00:23:16.993 
20:23:53 -- nvmf/common.sh@296 -- # x722=() 00:23:16.993 20:23:53 -- nvmf/common.sh@296 -- # local -ga x722 00:23:16.993 20:23:53 -- nvmf/common.sh@297 -- # mlx=() 00:23:16.993 20:23:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:16.993 20:23:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.993 20:23:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:16.993 20:23:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:16.993 20:23:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:16.993 20:23:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.993 20:23:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:16.993 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:16.993 20:23:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.993 20:23:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:16.993 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:16.993 20:23:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:16.993 20:23:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:16.993 20:23:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.993 20:23:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.993 20:23:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.993 20:23:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.993 20:23:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:16.993 Found net devices under 0000:af:00.0: cvl_0_0 00:23:16.993 20:23:53 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:16.993 20:23:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.993 20:23:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.993 20:23:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.993 20:23:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.993 20:23:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:16.993 Found net devices under 0000:af:00.1: cvl_0_1 00:23:16.993 20:23:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.993 20:23:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:16.993 20:23:53 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.993 20:23:53 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:23:16.993 20:23:53 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:16.993 20:23:53 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:23:16.993 20:23:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:23:16.993 20:23:53 -- common/autotest_common.sh@10 -- # set +x 00:23:16.993 ************************************ 00:23:16.993 START TEST nvmf_perf_adq 00:23:16.993 ************************************ 00:23:16.993 20:23:53 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:16.993 * Looking for test storage... 00:23:16.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:16.993 20:23:53 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.994 20:23:53 -- nvmf/common.sh@7 -- # uname -s 00:23:16.994 20:23:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.994 20:23:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.994 20:23:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.994 20:23:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.994 20:23:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.994 20:23:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.994 20:23:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.994 20:23:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.994 20:23:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.994 20:23:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.994 20:23:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:16.994 20:23:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:16.994 20:23:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.994 20:23:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.994 20:23:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.994 20:23:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.994 20:23:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.994 20:23:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.994 20:23:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.994 20:23:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.994 20:23:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.994 20:23:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.994 20:23:53 -- paths/export.sh@5 -- # export PATH 00:23:16.994 20:23:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.994 20:23:53 -- nvmf/common.sh@46 -- # : 0 00:23:16.994 20:23:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:16.994 20:23:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:16.994 20:23:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:16.994 20:23:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.994 20:23:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.994 20:23:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:16.994 20:23:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:16.994 20:23:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:16.994 20:23:53 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:16.994 20:23:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:16.994 20:23:53 -- common/autotest_common.sh@10 -- # set +x 00:23:22.274 20:23:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:22.274 20:23:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:22.274 20:23:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:22.274 20:23:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:22.274 20:23:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:22.274 20:23:58 -- nvmf/common.sh@292 -- # pci_drivers=() 
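The nvmf/common.sh trace resuming below is gather_supported_nvmf_pci_devs again, this time for the perf_adq run: NICs are classified by PCI vendor:device ID, and with an e810 NIC plus TCP transport the two 0x8086:0x159b ports end up as the test interfaces. A simplified sketch of that classification (pci_bus_cache is populated elsewhere in common.sh; mapping "vendor:device" keys to PCI addresses is an assumption about its shape, and only a subset of the device IDs from the trace is shown):

# Illustrative condensation of the classification visible in the trace.
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=() net_devs=()
e810+=(${pci_bus_cache["$intel:0x1592"]})    # another E810 device ID from the trace
e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810 ID matched on this host
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs
pci_devs=("${e810[@]}")                      # e810 takes precedence here
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    net_devs+=("${pci_net_devs[@]##*/}")     # yields cvl_0_0, cvl_0_1
done
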
00:23:22.274 20:23:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:22.274 20:23:58 -- nvmf/common.sh@294 -- # net_devs=() 00:23:22.274 20:23:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:22.274 20:23:58 -- nvmf/common.sh@295 -- # e810=() 00:23:22.274 20:23:58 -- nvmf/common.sh@295 -- # local -ga e810 00:23:22.274 20:23:58 -- nvmf/common.sh@296 -- # x722=() 00:23:22.274 20:23:58 -- nvmf/common.sh@296 -- # local -ga x722 00:23:22.274 20:23:58 -- nvmf/common.sh@297 -- # mlx=() 00:23:22.274 20:23:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:22.274 20:23:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.274 20:23:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:22.274 20:23:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:22.274 20:23:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:22.274 20:23:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:22.274 20:23:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:22.274 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:22.274 20:23:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:22.274 20:23:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:22.274 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:22.274 20:23:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:22.274 20:23:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:22.274 20:23:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:22.274 20:23:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.274 20:23:58 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:22.274 20:23:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.274 20:23:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:22.274 Found net devices under 0000:af:00.0: cvl_0_0 00:23:22.274 20:23:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.274 20:23:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:22.274 20:23:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.274 20:23:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:22.274 20:23:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.274 20:23:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:22.274 Found net devices under 0000:af:00.1: cvl_0_1 00:23:22.274 20:23:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.274 20:23:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:22.274 20:23:58 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.274 20:23:58 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:22.274 20:23:58 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:22.274 20:23:58 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:23:22.274 20:23:58 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:22.533 20:23:59 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:24.439 20:24:01 -- target/perf_adq.sh@54 -- # sleep 5 00:23:29.718 20:24:06 -- target/perf_adq.sh@67 -- # nvmftestinit 00:23:29.718 20:24:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:29.718 20:24:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.718 20:24:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:29.718 20:24:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:29.718 20:24:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:29.718 20:24:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.718 20:24:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.718 20:24:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.718 20:24:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:29.718 20:24:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:29.718 20:24:06 -- common/autotest_common.sh@10 -- # set +x 00:23:29.718 20:24:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:29.718 20:24:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:29.718 20:24:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:29.718 20:24:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:29.718 20:24:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:29.718 20:24:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:29.718 20:24:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:29.718 20:24:06 -- nvmf/common.sh@294 -- # net_devs=() 00:23:29.718 20:24:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:29.718 20:24:06 -- nvmf/common.sh@295 -- # e810=() 00:23:29.718 20:24:06 -- nvmf/common.sh@295 -- # local -ga e810 00:23:29.718 20:24:06 -- nvmf/common.sh@296 -- # x722=() 00:23:29.718 20:24:06 -- nvmf/common.sh@296 -- # local -ga x722 00:23:29.718 20:24:06 -- nvmf/common.sh@297 -- # mlx=() 00:23:29.718 20:24:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:29.718 20:24:06 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.718 20:24:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:29.718 20:24:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:29.718 20:24:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:29.718 20:24:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:29.718 20:24:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:29.718 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:29.718 20:24:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:29.718 20:24:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:29.718 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:29.718 20:24:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:29.718 20:24:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:29.718 20:24:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.718 20:24:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:29.718 20:24:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.718 20:24:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:29.718 Found net devices under 0000:af:00.0: cvl_0_0 00:23:29.718 20:24:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.718 20:24:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:29.718 20:24:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.718 20:24:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
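gather_supported_nvmf_pci_devs, traced above and continuing just below, buckets NICs by PCI vendor:device ID (e810, x722, mlx) and then resolves each matched function to its kernel net device via sysfs. A standalone sketch of that discovery, with the device-ID list abbreviated to the E810 IDs seen in this run:

    # Walk sysfs and report Intel E810 ports plus their net devices.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")                    # e.g. 0x8086 (Intel)
        device=$(<"$pci/device")                    # e.g. 0x159b or 0x1592 (E810)
        [[ $vendor == 0x8086 ]] || continue
        [[ $device == 0x159b || $device == 0x1592 ]] || continue
        pci_net_devs=("$pci"/net/*)                 # net devices behind this function
        [[ -e ${pci_net_devs[0]} ]] || continue
        echo "Found ${pci##*/} ($vendor - $device): ${pci_net_devs[*]##*/}"
    done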
00:23:29.718 20:24:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.718 20:24:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:29.718 Found net devices under 0000:af:00.1: cvl_0_1 00:23:29.718 20:24:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.718 20:24:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:29.718 20:24:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:29.718 20:24:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:29.718 20:24:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:29.718 20:24:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.718 20:24:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.718 20:24:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.718 20:24:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:29.718 20:24:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.718 20:24:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.718 20:24:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:29.718 20:24:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.718 20:24:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.718 20:24:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:29.718 20:24:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:29.718 20:24:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.718 20:24:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.718 20:24:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.718 20:24:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.718 20:24:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:29.718 20:24:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.718 20:24:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.718 20:24:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.718 20:24:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:29.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:23:29.718 00:23:29.718 --- 10.0.0.2 ping statistics --- 00:23:29.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.718 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:23:29.718 20:24:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:23:29.718 00:23:29.718 --- 10.0.0.1 ping statistics --- 00:23:29.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.718 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:23:29.718 20:24:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.718 20:24:07 -- nvmf/common.sh@410 -- # return 0 00:23:29.718 20:24:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:29.718 20:24:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.718 20:24:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:29.719 20:24:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:29.719 20:24:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.719 20:24:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:29.719 20:24:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:29.719 20:24:07 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:29.719 20:24:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:29.719 20:24:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:29.719 20:24:07 -- common/autotest_common.sh@10 -- # set +x 00:23:29.719 20:24:07 -- nvmf/common.sh@469 -- # nvmfpid=1867381 00:23:29.719 20:24:07 -- nvmf/common.sh@470 -- # waitforlisten 1867381 00:23:29.719 20:24:07 -- common/autotest_common.sh@817 -- # '[' -z 1867381 ']' 00:23:29.719 20:24:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.719 20:24:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:29.719 20:24:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.719 20:24:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:29.719 20:24:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:29.719 20:24:07 -- common/autotest_common.sh@10 -- # set +x 00:23:29.979 [2024-02-14 20:24:07.153660] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:23:29.979 [2024-02-14 20:24:07.153703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.979 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.979 [2024-02-14 20:24:07.215348] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.979 [2024-02-14 20:24:07.291937] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:29.979 [2024-02-14 20:24:07.292056] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.979 [2024-02-14 20:24:07.292064] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.979 [2024-02-14 20:24:07.292069] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
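The nvmf_tcp_init sequence above gives the two E810 ports separate roles: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), with an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed, hedged replay of those steps, using the interface and namespace names from this run:

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> initiator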
00:23:29.979 [2024-02-14 20:24:07.292107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.979 [2024-02-14 20:24:07.292126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.979 [2024-02-14 20:24:07.292224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.979 [2024-02-14 20:24:07.292226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.548 20:24:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:30.548 20:24:07 -- common/autotest_common.sh@850 -- # return 0 00:23:30.548 20:24:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:30.548 20:24:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:30.548 20:24:07 -- common/autotest_common.sh@10 -- # set +x 00:23:30.807 20:24:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.807 20:24:07 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:23:30.807 20:24:07 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:30.807 20:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.807 20:24:07 -- common/autotest_common.sh@10 -- # set +x 00:23:30.807 20:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.807 20:24:08 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:30.807 20:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.807 20:24:08 -- common/autotest_common.sh@10 -- # set +x 00:23:30.807 20:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.807 20:24:08 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:30.807 20:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.807 20:24:08 -- common/autotest_common.sh@10 -- # set +x 00:23:30.807 [2024-02-14 20:24:08.098249] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.807 20:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.807 20:24:08 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:30.807 20:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.807 20:24:08 -- common/autotest_common.sh@10 -- # set +x 00:23:30.807 Malloc1 00:23:30.807 20:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.807 20:24:08 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.807 20:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.807 20:24:08 -- common/autotest_common.sh@10 -- # set +x 00:23:30.807 20:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.807 20:24:08 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:30.807 20:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.807 20:24:08 -- common/autotest_common.sh@10 -- # set +x 00:23:30.807 20:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.807 20:24:08 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.807 20:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.807 20:24:08 -- common/autotest_common.sh@10 -- # set +x 00:23:30.807 [2024-02-14 20:24:08.145369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.807 20:24:08 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.807 20:24:08 -- target/perf_adq.sh@73 -- # perfpid=1867630 00:23:30.807 20:24:08 -- target/perf_adq.sh@74 -- # sleep 2 00:23:30.807 20:24:08 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:30.807 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.336 20:24:10 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:23:33.336 20:24:10 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:33.336 20:24:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.336 20:24:10 -- common/autotest_common.sh@10 -- # set +x 00:23:33.336 20:24:10 -- target/perf_adq.sh@76 -- # wc -l 00:23:33.336 20:24:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.336 20:24:10 -- target/perf_adq.sh@76 -- # count=4 00:23:33.336 20:24:10 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:23:33.336 20:24:10 -- target/perf_adq.sh@81 -- # wait 1867630 00:23:41.449 Initializing NVMe Controllers 00:23:41.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:41.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:41.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:41.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:41.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:41.449 Initialization complete. Launching workers. 00:23:41.449 ======================================================== 00:23:41.449 Latency(us) 00:23:41.449 Device Information : IOPS MiB/s Average min max 00:23:41.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11379.00 44.45 5624.46 1605.30 9408.46 00:23:41.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11430.20 44.65 5599.54 1361.59 11803.14 00:23:41.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11358.80 44.37 5634.83 1417.52 12477.85 00:23:41.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11267.00 44.01 5679.81 1328.04 12503.38 00:23:41.449 ======================================================== 00:23:41.449 Total : 45435.00 177.48 5634.51 1328.04 12503.38 00:23:41.449 00:23:41.449 20:24:18 -- target/perf_adq.sh@82 -- # nvmftestfini 00:23:41.450 20:24:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:41.450 20:24:18 -- nvmf/common.sh@116 -- # sync 00:23:41.450 20:24:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:41.450 20:24:18 -- nvmf/common.sh@119 -- # set +e 00:23:41.450 20:24:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:41.450 20:24:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:41.450 rmmod nvme_tcp 00:23:41.450 rmmod nvme_fabrics 00:23:41.450 rmmod nvme_keyring 00:23:41.450 20:24:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:41.450 20:24:18 -- nvmf/common.sh@123 -- # set -e 00:23:41.450 20:24:18 -- nvmf/common.sh@124 -- # return 0 00:23:41.450 20:24:18 -- nvmf/common.sh@477 -- # '[' -n 1867381 ']' 00:23:41.450 20:24:18 -- nvmf/common.sh@478 -- # killprocess 1867381 00:23:41.450 20:24:18 -- common/autotest_common.sh@924 -- # '[' -z 1867381 ']' 00:23:41.450 20:24:18 -- common/autotest_common.sh@928 
-- # kill -0 1867381 00:23:41.450 20:24:18 -- common/autotest_common.sh@929 -- # uname 00:23:41.450 20:24:18 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:41.450 20:24:18 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1867381 00:23:41.450 20:24:18 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:41.450 20:24:18 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:41.450 20:24:18 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1867381' 00:23:41.450 killing process with pid 1867381 00:23:41.450 20:24:18 -- common/autotest_common.sh@943 -- # kill 1867381 00:23:41.450 20:24:18 -- common/autotest_common.sh@948 -- # wait 1867381 00:23:41.450 20:24:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:41.450 20:24:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:41.450 20:24:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:41.450 20:24:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.450 20:24:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:41.450 20:24:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.450 20:24:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.450 20:24:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.358 20:24:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:43.358 20:24:20 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:23:43.358 20:24:20 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:44.738 20:24:21 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:46.646 20:24:23 -- target/perf_adq.sh@54 -- # sleep 5 00:23:51.959 20:24:28 -- target/perf_adq.sh@87 -- # nvmftestinit 00:23:51.959 20:24:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:51.959 20:24:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.959 20:24:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:51.959 20:24:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:51.959 20:24:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:51.959 20:24:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.959 20:24:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.959 20:24:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.959 20:24:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:51.959 20:24:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:51.959 20:24:28 -- common/autotest_common.sh@10 -- # set +x 00:23:51.959 20:24:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:51.959 20:24:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:51.959 20:24:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:51.959 20:24:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:51.959 20:24:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:51.959 20:24:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:51.959 20:24:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:51.959 20:24:28 -- nvmf/common.sh@294 -- # net_devs=() 00:23:51.959 20:24:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:51.959 20:24:28 -- nvmf/common.sh@295 -- # e810=() 00:23:51.959 20:24:28 -- nvmf/common.sh@295 -- # local -ga e810 00:23:51.959 20:24:28 -- nvmf/common.sh@296 -- # x722=() 00:23:51.959 20:24:28 -- nvmf/common.sh@296 -- # local -ga x722 00:23:51.959 20:24:28 -- nvmf/common.sh@297 -- # mlx=() 00:23:51.959 
20:24:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:51.959 20:24:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.959 20:24:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:51.959 20:24:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:51.959 20:24:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:51.959 20:24:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:51.959 20:24:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:51.959 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:51.959 20:24:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:51.959 20:24:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:51.959 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:51.959 20:24:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:51.959 20:24:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:51.959 20:24:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.959 20:24:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:51.959 20:24:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.959 20:24:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:51.959 Found net devices under 0000:af:00.0: cvl_0_0 00:23:51.959 20:24:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.959 20:24:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:51.959 20:24:28 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.959 20:24:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:51.959 20:24:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.959 20:24:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:51.959 Found net devices under 0000:af:00.1: cvl_0_1 00:23:51.959 20:24:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.959 20:24:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:51.959 20:24:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:51.959 20:24:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:51.959 20:24:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.959 20:24:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.959 20:24:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.959 20:24:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:51.959 20:24:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.959 20:24:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.959 20:24:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:51.959 20:24:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.959 20:24:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.959 20:24:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:51.959 20:24:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:51.959 20:24:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.959 20:24:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.959 20:24:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.959 20:24:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.959 20:24:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:51.959 20:24:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.959 20:24:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.959 20:24:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.959 20:24:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:51.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:23:51.959 00:23:51.959 --- 10.0.0.2 ping statistics --- 00:23:51.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.959 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:23:51.959 20:24:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:23:51.959 00:23:51.959 --- 10.0.0.1 ping statistics --- 00:23:51.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.959 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:23:51.959 20:24:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.959 20:24:28 -- nvmf/common.sh@410 -- # return 0 00:23:51.959 20:24:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:51.959 20:24:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.959 20:24:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:51.959 20:24:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.959 20:24:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:51.959 20:24:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:51.959 20:24:28 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:23:51.959 20:24:28 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:51.960 20:24:28 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:51.960 20:24:28 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:51.960 net.core.busy_poll = 1 00:23:51.960 20:24:28 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:51.960 net.core.busy_read = 1 00:23:51.960 20:24:28 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:51.960 20:24:28 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:51.960 20:24:29 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:51.960 20:24:29 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:51.960 20:24:29 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:51.960 20:24:29 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:51.960 20:24:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:51.960 20:24:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:51.960 20:24:29 -- common/autotest_common.sh@10 -- # set +x 00:23:51.960 20:24:29 -- nvmf/common.sh@469 -- # nvmfpid=1871260 00:23:51.960 20:24:29 -- nvmf/common.sh@470 -- # waitforlisten 1871260 00:23:51.960 20:24:29 -- common/autotest_common.sh@817 -- # '[' -z 1871260 ']' 00:23:51.960 20:24:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.960 20:24:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:51.960 20:24:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
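adq_configure_driver, traced above, is the host half of the ADQ setup: enable hardware TC offload on the target port, disable the ice driver's packet-inspect optimization, turn on busy polling, split the queues into two traffic classes with mqprio, and pin NVMe/TCP flows (dst 10.0.0.2:4420) to the second class with a hardware-offloaded flower filter. A hedged standalone sketch (in the actual run every command is wrapped in ip netns exec cvl_0_0_ns_spdk):

    IFACE=cvl_0_0
    ethtool --offload "$IFACE" hw-tc-offload on
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    # TC0 = default queues 0-1, TC1 = ADQ queues 2-3, offloaded in channel mode
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress
    # Steer the NVMe/TCP listener's traffic into TC1, bypassing software fallback
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1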
00:23:51.960 20:24:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:51.960 20:24:29 -- common/autotest_common.sh@10 -- # set +x 00:23:51.960 20:24:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:51.960 [2024-02-14 20:24:29.199590] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:23:51.960 [2024-02-14 20:24:29.199636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.960 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.960 [2024-02-14 20:24:29.261596] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.960 [2024-02-14 20:24:29.336160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:51.960 [2024-02-14 20:24:29.336271] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.960 [2024-02-14 20:24:29.336278] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.960 [2024-02-14 20:24:29.336284] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.960 [2024-02-14 20:24:29.336321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.960 [2024-02-14 20:24:29.336343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.960 [2024-02-14 20:24:29.336452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.960 [2024-02-14 20:24:29.336452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.897 20:24:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:52.897 20:24:29 -- common/autotest_common.sh@850 -- # return 0 00:23:52.897 20:24:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:52.897 20:24:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:52.897 20:24:29 -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 20:24:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.897 20:24:30 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:23:52.897 20:24:30 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:52.897 20:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.897 20:24:30 -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 20:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.897 20:24:30 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:52.897 20:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.897 20:24:30 -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 20:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.897 20:24:30 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:52.897 20:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.897 20:24:30 -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 [2024-02-14 20:24:30.116270] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.897 20:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
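With the target started under --wait-for-rpc, adq_configure_nvmf_target 1 (traced above, with the bdev and subsystem steps following just below) applies the ADQ-aware socket options before the framework initializes: placement-id mode 1 ties incoming connections to the poll group matching their hardware queue, and the transport is created with the corresponding socket priority. A hedged equivalent using SPDK's rpc.py (the path shown is an assumption; rpc_cmd in the trace is a thin wrapper around it):

    RPC="./scripts/rpc.py"   # assumed location; adjust to your SPDK checkout
    $RPC sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once spdk_nvme_perf is running, the test judges placement by counting idle poll groups, as in the nvmf_get_stats | jq check traced below: with correct ADQ steering the four queue pairs collapse onto two of the four poll groups, so at least two groups must report current_io_qpairs == 0.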
00:23:52.897 20:24:30 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:52.897 20:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.897 20:24:30 -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 Malloc1 00:23:52.897 20:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.897 20:24:30 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:52.897 20:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.897 20:24:30 -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 20:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.897 20:24:30 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:52.897 20:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.897 20:24:30 -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 20:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.897 20:24:30 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.897 20:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.897 20:24:30 -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 [2024-02-14 20:24:30.159732] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.897 20:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.897 20:24:30 -- target/perf_adq.sh@94 -- # perfpid=1871462 00:23:52.897 20:24:30 -- target/perf_adq.sh@95 -- # sleep 2 00:23:52.897 20:24:30 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:52.897 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.792 20:24:32 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:23:54.792 20:24:32 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:54.792 20:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.792 20:24:32 -- target/perf_adq.sh@97 -- # wc -l 00:23:54.792 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:23:54.792 20:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.049 20:24:32 -- target/perf_adq.sh@97 -- # count=2 00:23:55.049 20:24:32 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:23:55.049 20:24:32 -- target/perf_adq.sh@103 -- # wait 1871462 00:24:03.151 Initializing NVMe Controllers 00:24:03.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:03.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:03.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:03.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:03.151 Initialization complete. Launching workers. 
00:24:03.151 ======================================================== 00:24:03.151 Latency(us) 00:24:03.151 Device Information : IOPS MiB/s Average min max 00:24:03.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6330.30 24.73 10113.94 1722.03 57401.79 00:24:03.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6269.60 24.49 10210.92 1660.84 53298.70 00:24:03.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6249.90 24.41 10243.08 1834.89 54603.50 00:24:03.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12832.80 50.13 5002.59 1505.98 46405.86 00:24:03.151 ======================================================== 00:24:03.151 Total : 31682.60 123.76 8088.29 1505.98 57401.79 00:24:03.151 00:24:03.151 20:24:40 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:03.151 20:24:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:03.151 20:24:40 -- nvmf/common.sh@116 -- # sync 00:24:03.151 20:24:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:03.151 20:24:40 -- nvmf/common.sh@119 -- # set +e 00:24:03.151 20:24:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:03.151 20:24:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:03.151 rmmod nvme_tcp 00:24:03.151 rmmod nvme_fabrics 00:24:03.151 rmmod nvme_keyring 00:24:03.151 20:24:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:03.151 20:24:40 -- nvmf/common.sh@123 -- # set -e 00:24:03.151 20:24:40 -- nvmf/common.sh@124 -- # return 0 00:24:03.151 20:24:40 -- nvmf/common.sh@477 -- # '[' -n 1871260 ']' 00:24:03.151 20:24:40 -- nvmf/common.sh@478 -- # killprocess 1871260 00:24:03.151 20:24:40 -- common/autotest_common.sh@924 -- # '[' -z 1871260 ']' 00:24:03.151 20:24:40 -- common/autotest_common.sh@928 -- # kill -0 1871260 00:24:03.151 20:24:40 -- common/autotest_common.sh@929 -- # uname 00:24:03.151 20:24:40 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:03.151 20:24:40 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1871260 00:24:03.151 20:24:40 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:03.151 20:24:40 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:03.151 20:24:40 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1871260' 00:24:03.151 killing process with pid 1871260 00:24:03.151 20:24:40 -- common/autotest_common.sh@943 -- # kill 1871260 00:24:03.151 20:24:40 -- common/autotest_common.sh@948 -- # wait 1871260 00:24:03.411 20:24:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:03.411 20:24:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:03.411 20:24:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:03.411 20:24:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.411 20:24:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:03.411 20:24:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.411 20:24:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.411 20:24:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.949 20:24:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:05.949 20:24:42 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:05.949 00:24:05.949 real 0m49.398s 00:24:05.949 user 2m48.231s 00:24:05.949 sys 0m10.018s 00:24:05.949 20:24:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:05.949 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:05.949 
************************************ 00:24:05.949 END TEST nvmf_perf_adq 00:24:05.949 ************************************ 00:24:05.949 20:24:42 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:05.949 20:24:42 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:05.949 20:24:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:05.949 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:05.949 ************************************ 00:24:05.949 START TEST nvmf_shutdown 00:24:05.949 ************************************ 00:24:05.949 20:24:42 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:05.949 * Looking for test storage... 00:24:05.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:05.949 20:24:42 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.949 20:24:42 -- nvmf/common.sh@7 -- # uname -s 00:24:05.949 20:24:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.949 20:24:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.949 20:24:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.949 20:24:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.949 20:24:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.949 20:24:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.949 20:24:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.949 20:24:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.949 20:24:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.949 20:24:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.949 20:24:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:05.949 20:24:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:05.949 20:24:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.949 20:24:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.949 20:24:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.949 20:24:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.949 20:24:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.949 20:24:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.949 20:24:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.949 20:24:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.949 20:24:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.949 20:24:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.949 20:24:42 -- paths/export.sh@5 -- # export PATH 00:24:05.949 20:24:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.949 20:24:42 -- nvmf/common.sh@46 -- # : 0 00:24:05.949 20:24:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:05.949 20:24:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:05.949 20:24:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:05.949 20:24:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.950 20:24:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.950 20:24:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:05.950 20:24:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:05.950 20:24:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:05.950 20:24:42 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:05.950 20:24:42 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:05.950 20:24:42 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:05.950 20:24:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:24:05.950 20:24:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:05.950 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:05.950 ************************************ 00:24:05.950 START TEST nvmf_shutdown_tc1 00:24:05.950 ************************************ 00:24:05.950 20:24:42 -- common/autotest_common.sh@1102 -- # nvmf_shutdown_tc1 00:24:05.950 20:24:42 -- target/shutdown.sh@74 -- # starttarget 00:24:05.950 20:24:42 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:05.950 20:24:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:05.950 20:24:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.950 20:24:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:05.950 20:24:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:05.950 20:24:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:05.950 
20:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.950 20:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.950 20:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.950 20:24:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:05.950 20:24:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:05.950 20:24:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:05.950 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:11.227 20:24:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:11.227 20:24:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:11.227 20:24:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:11.227 20:24:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:11.227 20:24:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:11.227 20:24:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:11.227 20:24:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:11.227 20:24:48 -- nvmf/common.sh@294 -- # net_devs=() 00:24:11.227 20:24:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:11.227 20:24:48 -- nvmf/common.sh@295 -- # e810=() 00:24:11.227 20:24:48 -- nvmf/common.sh@295 -- # local -ga e810 00:24:11.227 20:24:48 -- nvmf/common.sh@296 -- # x722=() 00:24:11.227 20:24:48 -- nvmf/common.sh@296 -- # local -ga x722 00:24:11.227 20:24:48 -- nvmf/common.sh@297 -- # mlx=() 00:24:11.227 20:24:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:11.227 20:24:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.227 20:24:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:11.227 20:24:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:11.227 20:24:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:11.227 20:24:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:11.227 20:24:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:11.227 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:11.227 20:24:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:24:11.227 20:24:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:11.227 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:11.227 20:24:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:11.227 20:24:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:11.227 20:24:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.227 20:24:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:11.227 20:24:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.227 20:24:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:11.227 Found net devices under 0000:af:00.0: cvl_0_0 00:24:11.227 20:24:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.227 20:24:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:11.227 20:24:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.227 20:24:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:11.227 20:24:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.227 20:24:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:11.227 Found net devices under 0000:af:00.1: cvl_0_1 00:24:11.227 20:24:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.227 20:24:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:11.227 20:24:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:11.227 20:24:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:11.227 20:24:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:11.227 20:24:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.227 20:24:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.227 20:24:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.227 20:24:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:11.227 20:24:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.227 20:24:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.227 20:24:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:11.227 20:24:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.227 20:24:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.227 20:24:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:11.227 20:24:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:11.227 20:24:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.227 20:24:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.486 20:24:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.486 20:24:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.486 20:24:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:11.486 20:24:48 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.486 20:24:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.486 20:24:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.487 20:24:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:11.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:11.487 00:24:11.487 --- 10.0.0.2 ping statistics --- 00:24:11.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.487 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:11.487 20:24:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:11.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:24:11.487 00:24:11.487 --- 10.0.0.1 ping statistics --- 00:24:11.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.487 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:24:11.487 20:24:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.487 20:24:48 -- nvmf/common.sh@410 -- # return 0 00:24:11.487 20:24:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:11.487 20:24:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.487 20:24:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:11.487 20:24:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:11.487 20:24:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.487 20:24:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:11.487 20:24:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:11.487 20:24:48 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:11.487 20:24:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:11.487 20:24:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:11.487 20:24:48 -- common/autotest_common.sh@10 -- # set +x 00:24:11.487 20:24:48 -- nvmf/common.sh@469 -- # nvmfpid=1876961 00:24:11.487 20:24:48 -- nvmf/common.sh@470 -- # waitforlisten 1876961 00:24:11.487 20:24:48 -- common/autotest_common.sh@817 -- # '[' -z 1876961 ']' 00:24:11.487 20:24:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.487 20:24:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:11.487 20:24:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.487 20:24:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:11.487 20:24:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:11.487 20:24:48 -- common/autotest_common.sh@10 -- # set +x 00:24:11.746 [2024-02-14 20:24:48.916207] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:24:11.746 [2024-02-14 20:24:48.916251] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.746 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.746 [2024-02-14 20:24:48.979478] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.746 [2024-02-14 20:24:49.055582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:11.746 [2024-02-14 20:24:49.055711] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.746 [2024-02-14 20:24:49.055720] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.746 [2024-02-14 20:24:49.055727] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.746 [2024-02-14 20:24:49.055769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.746 [2024-02-14 20:24:49.055791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.746 [2024-02-14 20:24:49.055901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.746 [2024-02-14 20:24:49.055902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:12.314 20:24:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:12.314 20:24:49 -- common/autotest_common.sh@850 -- # return 0 00:24:12.314 20:24:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:12.314 20:24:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:12.314 20:24:49 -- common/autotest_common.sh@10 -- # set +x 00:24:12.574 20:24:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.574 20:24:49 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.574 20:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.574 20:24:49 -- common/autotest_common.sh@10 -- # set +x 00:24:12.574 [2024-02-14 20:24:49.745846] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.574 20:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.574 20:24:49 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:12.574 20:24:49 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:12.574 20:24:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:12.574 20:24:49 -- common/autotest_common.sh@10 -- # set +x 00:24:12.574 20:24:49 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- 
target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.574 20:24:49 -- target/shutdown.sh@28 -- # cat 00:24:12.574 20:24:49 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:12.574 20:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.574 20:24:49 -- common/autotest_common.sh@10 -- # set +x 00:24:12.574 Malloc1 00:24:12.574 [2024-02-14 20:24:49.841104] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.574 Malloc2 00:24:12.574 Malloc3 00:24:12.574 Malloc4 00:24:12.574 Malloc5 00:24:12.833 Malloc6 00:24:12.833 Malloc7 00:24:12.833 Malloc8 00:24:12.833 Malloc9 00:24:12.833 Malloc10 00:24:12.833 20:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.833 20:24:50 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:12.833 20:24:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:12.833 20:24:50 -- common/autotest_common.sh@10 -- # set +x 00:24:13.093 20:24:50 -- target/shutdown.sh@78 -- # perfpid=1877239 00:24:13.093 20:24:50 -- target/shutdown.sh@79 -- # waitforlisten 1877239 /var/tmp/bdevperf.sock 00:24:13.093 20:24:50 -- common/autotest_common.sh@817 -- # '[' -z 1877239 ']' 00:24:13.093 20:24:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.093 20:24:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:13.093 20:24:50 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:13.093 20:24:50 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:13.094 20:24:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
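
The nvmf_tcp_init trace above sets up the single-host test topology used by every test in this log: the target-side port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2/24, while the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, so the two ends exchange real NVMe/TCP traffic on port 4420. A condensed sketch of that sequence, reconstructed from the xtrace lines (interface, namespace, and address names exactly as in this run):

# build the namespace-isolated target/initiator pair traced above
ip netns add cvl_0_0_ns_spdk                            # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                      # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator reachability

This is also why nvmf_tgt is launched through 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD prefix folded into NVMF_APP at nvmf/common.sh@269): the target must listen on 10.0.0.2 inside the namespace while the initiator-side apps connect from the root namespace.
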
00:24:13.094 20:24:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:13.094 20:24:50 -- nvmf/common.sh@520 -- # config=() 00:24:13.094 20:24:50 -- common/autotest_common.sh@10 -- # set +x 00:24:13.094 20:24:50 -- nvmf/common.sh@520 -- # local subsystem config 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": "$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": "$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": "$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": "$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": 
"$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": "$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 [2024-02-14 20:24:50.310602] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:24:13.094 [2024-02-14 20:24:50.310655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": "$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": "$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": "$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 
00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 20:24:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:13.094 { 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme$subsystem", 00:24:13.094 "trtype": "$TEST_TRANSPORT", 00:24:13.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "$NVMF_PORT", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.094 "hdgst": ${hdgst:-false}, 00:24:13.094 "ddgst": ${ddgst:-false} 00:24:13.094 }, 00:24:13.094 "method": "bdev_nvme_attach_controller" 00:24:13.094 } 00:24:13.094 EOF 00:24:13.094 )") 00:24:13.094 20:24:50 -- nvmf/common.sh@542 -- # cat 00:24:13.094 20:24:50 -- nvmf/common.sh@544 -- # jq . 00:24:13.094 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.094 20:24:50 -- nvmf/common.sh@545 -- # IFS=, 00:24:13.094 20:24:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:13.094 "params": { 00:24:13.094 "name": "Nvme1", 00:24:13.094 "trtype": "tcp", 00:24:13.094 "traddr": "10.0.0.2", 00:24:13.094 "adrfam": "ipv4", 00:24:13.094 "trsvcid": "4420", 00:24:13.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.094 "hdgst": false, 00:24:13.094 "ddgst": false 00:24:13.094 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 },{ 00:24:13.095 "params": { 00:24:13.095 "name": "Nvme2", 00:24:13.095 "trtype": "tcp", 00:24:13.095 "traddr": "10.0.0.2", 00:24:13.095 "adrfam": "ipv4", 00:24:13.095 "trsvcid": "4420", 00:24:13.095 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:13.095 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:13.095 "hdgst": false, 00:24:13.095 "ddgst": false 00:24:13.095 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 },{ 00:24:13.095 "params": { 00:24:13.095 "name": "Nvme3", 00:24:13.095 "trtype": "tcp", 00:24:13.095 "traddr": "10.0.0.2", 00:24:13.095 "adrfam": "ipv4", 00:24:13.095 "trsvcid": "4420", 00:24:13.095 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:13.095 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:13.095 "hdgst": false, 00:24:13.095 "ddgst": false 00:24:13.095 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 },{ 00:24:13.095 "params": { 00:24:13.095 "name": "Nvme4", 00:24:13.095 "trtype": "tcp", 00:24:13.095 "traddr": "10.0.0.2", 00:24:13.095 "adrfam": "ipv4", 00:24:13.095 "trsvcid": "4420", 00:24:13.095 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:13.095 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:13.095 "hdgst": false, 00:24:13.095 "ddgst": false 00:24:13.095 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 },{ 00:24:13.095 "params": { 00:24:13.095 "name": "Nvme5", 00:24:13.095 "trtype": "tcp", 00:24:13.095 "traddr": "10.0.0.2", 00:24:13.095 "adrfam": "ipv4", 00:24:13.095 "trsvcid": "4420", 00:24:13.095 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:13.095 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:13.095 "hdgst": false, 00:24:13.095 "ddgst": false 00:24:13.095 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 },{ 00:24:13.095 "params": { 
00:24:13.095 "name": "Nvme6", 00:24:13.095 "trtype": "tcp", 00:24:13.095 "traddr": "10.0.0.2", 00:24:13.095 "adrfam": "ipv4", 00:24:13.095 "trsvcid": "4420", 00:24:13.095 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:13.095 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:13.095 "hdgst": false, 00:24:13.095 "ddgst": false 00:24:13.095 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 },{ 00:24:13.095 "params": { 00:24:13.095 "name": "Nvme7", 00:24:13.095 "trtype": "tcp", 00:24:13.095 "traddr": "10.0.0.2", 00:24:13.095 "adrfam": "ipv4", 00:24:13.095 "trsvcid": "4420", 00:24:13.095 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:13.095 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:13.095 "hdgst": false, 00:24:13.095 "ddgst": false 00:24:13.095 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 },{ 00:24:13.095 "params": { 00:24:13.095 "name": "Nvme8", 00:24:13.095 "trtype": "tcp", 00:24:13.095 "traddr": "10.0.0.2", 00:24:13.095 "adrfam": "ipv4", 00:24:13.095 "trsvcid": "4420", 00:24:13.095 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:13.095 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:13.095 "hdgst": false, 00:24:13.095 "ddgst": false 00:24:13.095 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 },{ 00:24:13.095 "params": { 00:24:13.095 "name": "Nvme9", 00:24:13.095 "trtype": "tcp", 00:24:13.095 "traddr": "10.0.0.2", 00:24:13.095 "adrfam": "ipv4", 00:24:13.095 "trsvcid": "4420", 00:24:13.095 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:13.095 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:13.095 "hdgst": false, 00:24:13.095 "ddgst": false 00:24:13.095 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 },{ 00:24:13.095 "params": { 00:24:13.095 "name": "Nvme10", 00:24:13.095 "trtype": "tcp", 00:24:13.095 "traddr": "10.0.0.2", 00:24:13.095 "adrfam": "ipv4", 00:24:13.095 "trsvcid": "4420", 00:24:13.095 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:13.095 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:13.095 "hdgst": false, 00:24:13.095 "ddgst": false 00:24:13.095 }, 00:24:13.095 "method": "bdev_nvme_attach_controller" 00:24:13.095 }' 00:24:13.095 [2024-02-14 20:24:50.375331] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.095 [2024-02-14 20:24:50.445733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.095 [2024-02-14 20:24:50.445790] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:24:14.472 20:24:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:14.472 20:24:51 -- common/autotest_common.sh@850 -- # return 0 00:24:14.472 20:24:51 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:14.472 20:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.472 20:24:51 -- common/autotest_common.sh@10 -- # set +x 00:24:14.472 20:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.472 20:24:51 -- target/shutdown.sh@83 -- # kill -9 1877239 00:24:14.472 20:24:51 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:14.472 20:24:51 -- target/shutdown.sh@87 -- # sleep 1 00:24:15.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1877239 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:15.407 20:24:52 -- target/shutdown.sh@88 -- # 
kill -0 1876961 00:24:15.407 20:24:52 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:15.407 20:24:52 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:15.407 20:24:52 -- nvmf/common.sh@520 -- # config=() 00:24:15.407 20:24:52 -- nvmf/common.sh@520 -- # local subsystem config 00:24:15.407 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.407 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.407 { 00:24:15.407 "params": { 00:24:15.407 "name": "Nvme$subsystem", 00:24:15.407 "trtype": "$TEST_TRANSPORT", 00:24:15.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.407 "adrfam": "ipv4", 00:24:15.407 "trsvcid": "$NVMF_PORT", 00:24:15.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.407 "hdgst": ${hdgst:-false}, 00:24:15.407 "ddgst": ${ddgst:-false} 00:24:15.407 }, 00:24:15.407 "method": "bdev_nvme_attach_controller" 00:24:15.407 } 00:24:15.407 EOF 00:24:15.407 )") 00:24:15.407 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.407 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.407 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.407 { 00:24:15.407 "params": { 00:24:15.407 "name": "Nvme$subsystem", 00:24:15.407 "trtype": "$TEST_TRANSPORT", 00:24:15.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.407 "adrfam": "ipv4", 00:24:15.407 "trsvcid": "$NVMF_PORT", 00:24:15.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.407 "hdgst": ${hdgst:-false}, 00:24:15.407 "ddgst": ${ddgst:-false} 00:24:15.407 }, 00:24:15.407 "method": "bdev_nvme_attach_controller" 00:24:15.407 } 00:24:15.407 EOF 00:24:15.407 )") 00:24:15.407 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.407 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.666 { 00:24:15.666 "params": { 00:24:15.666 "name": "Nvme$subsystem", 00:24:15.666 "trtype": "$TEST_TRANSPORT", 00:24:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.666 "adrfam": "ipv4", 00:24:15.666 "trsvcid": "$NVMF_PORT", 00:24:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.666 "hdgst": ${hdgst:-false}, 00:24:15.666 "ddgst": ${ddgst:-false} 00:24:15.666 }, 00:24:15.666 "method": "bdev_nvme_attach_controller" 00:24:15.666 } 00:24:15.666 EOF 00:24:15.666 )") 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.666 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.666 { 00:24:15.666 "params": { 00:24:15.666 "name": "Nvme$subsystem", 00:24:15.666 "trtype": "$TEST_TRANSPORT", 00:24:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.666 "adrfam": "ipv4", 00:24:15.666 "trsvcid": "$NVMF_PORT", 00:24:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.666 "hdgst": ${hdgst:-false}, 00:24:15.666 "ddgst": ${ddgst:-false} 00:24:15.666 }, 00:24:15.666 "method": "bdev_nvme_attach_controller" 00:24:15.666 } 00:24:15.666 EOF 00:24:15.666 )") 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.666 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.666 { 00:24:15.666 "params": { 00:24:15.666 "name": "Nvme$subsystem", 00:24:15.666 "trtype": "$TEST_TRANSPORT", 00:24:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.666 "adrfam": "ipv4", 00:24:15.666 "trsvcid": "$NVMF_PORT", 00:24:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.666 "hdgst": ${hdgst:-false}, 00:24:15.666 "ddgst": ${ddgst:-false} 00:24:15.666 }, 00:24:15.666 "method": "bdev_nvme_attach_controller" 00:24:15.666 } 00:24:15.666 EOF 00:24:15.666 )") 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.666 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.666 { 00:24:15.666 "params": { 00:24:15.666 "name": "Nvme$subsystem", 00:24:15.666 "trtype": "$TEST_TRANSPORT", 00:24:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.666 "adrfam": "ipv4", 00:24:15.666 "trsvcid": "$NVMF_PORT", 00:24:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.666 "hdgst": ${hdgst:-false}, 00:24:15.666 "ddgst": ${ddgst:-false} 00:24:15.666 }, 00:24:15.666 "method": "bdev_nvme_attach_controller" 00:24:15.666 } 00:24:15.666 EOF 00:24:15.666 )") 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.666 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.666 [2024-02-14 20:24:52.849733] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:24:15.666 [2024-02-14 20:24:52.849783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1877724 ] 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.666 { 00:24:15.666 "params": { 00:24:15.666 "name": "Nvme$subsystem", 00:24:15.666 "trtype": "$TEST_TRANSPORT", 00:24:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.666 "adrfam": "ipv4", 00:24:15.666 "trsvcid": "$NVMF_PORT", 00:24:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.666 "hdgst": ${hdgst:-false}, 00:24:15.666 "ddgst": ${ddgst:-false} 00:24:15.666 }, 00:24:15.666 "method": "bdev_nvme_attach_controller" 00:24:15.666 } 00:24:15.666 EOF 00:24:15.666 )") 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.666 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.666 { 00:24:15.666 "params": { 00:24:15.666 "name": "Nvme$subsystem", 00:24:15.666 "trtype": "$TEST_TRANSPORT", 00:24:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.666 "adrfam": "ipv4", 00:24:15.666 "trsvcid": "$NVMF_PORT", 00:24:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.666 "hdgst": ${hdgst:-false}, 00:24:15.666 "ddgst": ${ddgst:-false} 00:24:15.666 }, 00:24:15.666 "method": "bdev_nvme_attach_controller" 00:24:15.666 } 00:24:15.666 EOF 00:24:15.666 )") 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.666 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.666 { 00:24:15.666 "params": { 00:24:15.666 
"name": "Nvme$subsystem", 00:24:15.666 "trtype": "$TEST_TRANSPORT", 00:24:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.666 "adrfam": "ipv4", 00:24:15.666 "trsvcid": "$NVMF_PORT", 00:24:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.666 "hdgst": ${hdgst:-false}, 00:24:15.666 "ddgst": ${ddgst:-false} 00:24:15.666 }, 00:24:15.666 "method": "bdev_nvme_attach_controller" 00:24:15.666 } 00:24:15.666 EOF 00:24:15.666 )") 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.666 20:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.666 { 00:24:15.666 "params": { 00:24:15.666 "name": "Nvme$subsystem", 00:24:15.666 "trtype": "$TEST_TRANSPORT", 00:24:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.666 "adrfam": "ipv4", 00:24:15.666 "trsvcid": "$NVMF_PORT", 00:24:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.666 "hdgst": ${hdgst:-false}, 00:24:15.666 "ddgst": ${ddgst:-false} 00:24:15.666 }, 00:24:15.666 "method": "bdev_nvme_attach_controller" 00:24:15.666 } 00:24:15.666 EOF 00:24:15.666 )") 00:24:15.666 20:24:52 -- nvmf/common.sh@542 -- # cat 00:24:15.666 20:24:52 -- nvmf/common.sh@544 -- # jq . 00:24:15.666 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.667 20:24:52 -- nvmf/common.sh@545 -- # IFS=, 00:24:15.667 20:24:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme1", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 },{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme2", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:15.667 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 },{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme3", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:15.667 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 },{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme4", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:15.667 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 },{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme5", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:15.667 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 },{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme6", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:15.667 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 },{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme7", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:15.667 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 },{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme8", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:15.667 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 },{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme9", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:15.667 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 },{ 00:24:15.667 "params": { 00:24:15.667 "name": "Nvme10", 00:24:15.667 "trtype": "tcp", 00:24:15.667 "traddr": "10.0.0.2", 00:24:15.667 "adrfam": "ipv4", 00:24:15.667 "trsvcid": "4420", 00:24:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:15.667 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:15.667 "hdgst": false, 00:24:15.667 "ddgst": false 00:24:15.667 }, 00:24:15.667 "method": "bdev_nvme_attach_controller" 00:24:15.667 }' 00:24:15.667 [2024-02-14 20:24:52.911243] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.667 [2024-02-14 20:24:52.982079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.667 [2024-02-14 20:24:52.982135] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:24:17.096 Running I/O for 1 seconds... 
00:24:18.477
00:24:18.477                                                                     Latency(us)
00:24:18.477 Device Information            : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average       min        max
00:24:18.477 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme1n1                       :       1.05   504.30    31.52     0.00   0.00  123475.30   7926.74  113346.07
00:24:18.477 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme2n1                       :       1.06   499.76    31.24     0.00   0.00  124678.10   9362.29  108352.85
00:24:18.477 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme3n1                       :       1.07   406.70    25.42     0.00   0.00  151946.45  22219.82  149796.57
00:24:18.477 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme4n1                       :       1.11   436.89    27.31     0.00   0.00  135601.55   9487.12  116841.33
00:24:18.477 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme5n1                       :       1.06   455.06    28.44     0.00   0.00  134541.41  22594.32  121834.54
00:24:18.477 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme6n1                       :       1.07   532.01    33.25     0.00   0.00  115252.97  11234.74   94871.16
00:24:18.477 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme7n1                       :       1.08   449.24    28.08     0.00   0.00  135793.65  10673.01  118838.61
00:24:18.477 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme8n1                       :       1.11   433.22    27.08     0.00   0.00  134333.29  13731.35  109351.50
00:24:18.477 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme9n1                       :       1.08   527.03    32.94     0.00   0.00  114348.07   8925.38   99864.38
00:24:18.477 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:18.477 Verification LBA range: start 0x0 length 0x400
00:24:18.477 Nvme10n1                      :       1.08   524.75    32.80     0.00   0.00  114306.88   7489.83   98366.42
===================================================================================================================
00:24:18.477 Total                         :             4768.96   298.06     0.00   0.00  127443.48   7489.83  149796.57
00:24:18.477 [2024-02-14 20:24:55.587651] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:24:18.477 20:24:55 -- target/shutdown.sh@93 -- # stoptarget
00:24:18.477 20:24:55 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:18.477 20:24:55 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:18.477 20:24:55 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:18.477 20:24:55 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:18.477 20:24:55 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:18.477 20:24:55 -- nvmf/common.sh@116 -- # sync
00:24:18.477 20:24:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:18.477 20:24:55 -- nvmf/common.sh@119 -- # set +e
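
Two quick sanity checks on the numbers in the table, as a small bash sketch; every constant in it is taken from this log (the -o 65536 I/O size from the bdevperf command line, the Total row above, the -m 0x1E mask passed to nvmf_tgt, and the Core Mask 0x1 reported per job):

# 1) Throughput: with -o 65536 each I/O is 64 KiB, so MiB/s = IOPS * 65536 / 1048576 = IOPS / 16
echo 'scale=2; 4768.96 / 16' | bc             # -> 298.06, matching the Total row's MiB/s

# 2) Core placement: the target ran with -m 0x1E (binary 11110) while the bdevperf jobs
#    report core mask 0x1, so target reactors and the I/O generator never share a core
mask=$((0x1E))
for core in {0..7}; do
    (( mask & (1 << core) )) && echo "reactor on core $core"
done                                          # cores 1 2 3 4, matching the reactor_run notices
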
00:24:18.477 20:24:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:18.477 20:24:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:18.477 rmmod nvme_tcp 00:24:18.477 rmmod nvme_fabrics 00:24:18.477 rmmod nvme_keyring 00:24:18.477 20:24:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:18.477 20:24:55 -- nvmf/common.sh@123 -- # set -e 00:24:18.477 20:24:55 -- nvmf/common.sh@124 -- # return 0 00:24:18.477 20:24:55 -- nvmf/common.sh@477 -- # '[' -n 1876961 ']' 00:24:18.477 20:24:55 -- nvmf/common.sh@478 -- # killprocess 1876961 00:24:18.477 20:24:55 -- common/autotest_common.sh@924 -- # '[' -z 1876961 ']' 00:24:18.477 20:24:55 -- common/autotest_common.sh@928 -- # kill -0 1876961 00:24:18.477 20:24:55 -- common/autotest_common.sh@929 -- # uname 00:24:18.477 20:24:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:18.477 20:24:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1876961 00:24:18.737 20:24:55 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:24:18.737 20:24:55 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:24:18.737 20:24:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1876961' 00:24:18.737 killing process with pid 1876961 00:24:18.737 20:24:55 -- common/autotest_common.sh@943 -- # kill 1876961 00:24:18.737 20:24:55 -- common/autotest_common.sh@948 -- # wait 1876961 00:24:18.996 20:24:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:18.996 20:24:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:18.996 20:24:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:18.996 20:24:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.996 20:24:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:18.996 20:24:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.996 20:24:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.996 20:24:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.536 20:24:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:21.536 00:24:21.536 real 0m15.396s 00:24:21.536 user 0m34.492s 00:24:21.536 sys 0m5.790s 00:24:21.536 20:24:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:21.536 20:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:21.536 ************************************ 00:24:21.536 END TEST nvmf_shutdown_tc1 00:24:21.536 ************************************ 00:24:21.536 20:24:58 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:21.536 20:24:58 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:24:21.536 20:24:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:21.536 20:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:21.536 ************************************ 00:24:21.536 START TEST nvmf_shutdown_tc2 00:24:21.536 ************************************ 00:24:21.537 20:24:58 -- common/autotest_common.sh@1102 -- # nvmf_shutdown_tc2 00:24:21.537 20:24:58 -- target/shutdown.sh@98 -- # starttarget 00:24:21.537 20:24:58 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:21.537 20:24:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:21.537 20:24:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.537 20:24:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:21.537 20:24:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:21.537 20:24:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:21.537 20:24:58 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.537 20:24:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.537 20:24:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.537 20:24:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:21.537 20:24:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:21.537 20:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:21.537 20:24:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:21.537 20:24:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:21.537 20:24:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:21.537 20:24:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:21.537 20:24:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:21.537 20:24:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:21.537 20:24:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:21.537 20:24:58 -- nvmf/common.sh@294 -- # net_devs=() 00:24:21.537 20:24:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:21.537 20:24:58 -- nvmf/common.sh@295 -- # e810=() 00:24:21.537 20:24:58 -- nvmf/common.sh@295 -- # local -ga e810 00:24:21.537 20:24:58 -- nvmf/common.sh@296 -- # x722=() 00:24:21.537 20:24:58 -- nvmf/common.sh@296 -- # local -ga x722 00:24:21.537 20:24:58 -- nvmf/common.sh@297 -- # mlx=() 00:24:21.537 20:24:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:21.537 20:24:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.537 20:24:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:21.537 20:24:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:21.537 20:24:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:21.537 20:24:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:21.537 20:24:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:21.537 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:21.537 20:24:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:24:21.537 20:24:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:21.537 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:21.537 20:24:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:21.537 20:24:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:21.537 20:24:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.537 20:24:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:21.537 20:24:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.537 20:24:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:21.537 Found net devices under 0000:af:00.0: cvl_0_0 00:24:21.537 20:24:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.537 20:24:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:21.537 20:24:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.537 20:24:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:21.537 20:24:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.537 20:24:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:21.537 Found net devices under 0000:af:00.1: cvl_0_1 00:24:21.537 20:24:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.537 20:24:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:21.537 20:24:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:21.537 20:24:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:21.537 20:24:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.537 20:24:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.537 20:24:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.537 20:24:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:21.537 20:24:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.537 20:24:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.537 20:24:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:21.537 20:24:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.537 20:24:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.537 20:24:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:21.537 20:24:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:21.537 20:24:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.537 20:24:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.537 20:24:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.537 20:24:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.537 20:24:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:21.537 20:24:58 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.537 20:24:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.537 20:24:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.537 20:24:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:21.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:24:21.537 00:24:21.537 --- 10.0.0.2 ping statistics --- 00:24:21.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.537 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:21.537 20:24:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:24:21.537 00:24:21.537 --- 10.0.0.1 ping statistics --- 00:24:21.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.537 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:24:21.537 20:24:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.537 20:24:58 -- nvmf/common.sh@410 -- # return 0 00:24:21.537 20:24:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:21.537 20:24:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.537 20:24:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:21.537 20:24:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.537 20:24:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:21.537 20:24:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:21.537 20:24:58 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:21.537 20:24:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:21.537 20:24:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:21.537 20:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:21.537 20:24:58 -- nvmf/common.sh@469 -- # nvmfpid=1878756 00:24:21.537 20:24:58 -- nvmf/common.sh@470 -- # waitforlisten 1878756 00:24:21.537 20:24:58 -- common/autotest_common.sh@817 -- # '[' -z 1878756 ']' 00:24:21.537 20:24:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.537 20:24:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:21.537 20:24:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.537 20:24:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:21.537 20:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:21.537 20:24:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:21.537 [2024-02-14 20:24:58.722697] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:24:21.537 [2024-02-14 20:24:58.722739] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.537 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.537 [2024-02-14 20:24:58.786716] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.537 [2024-02-14 20:24:58.862373] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:21.537 [2024-02-14 20:24:58.862480] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.537 [2024-02-14 20:24:58.862487] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.538 [2024-02-14 20:24:58.862494] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.538 [2024-02-14 20:24:58.862537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.538 [2024-02-14 20:24:58.862637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.538 [2024-02-14 20:24:58.862743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.538 [2024-02-14 20:24:58.862744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:22.105 20:24:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:22.105 20:24:59 -- common/autotest_common.sh@850 -- # return 0 00:24:22.105 20:24:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:22.105 20:24:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:22.105 20:24:59 -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 20:24:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.365 20:24:59 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:22.365 20:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.365 20:24:59 -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 [2024-02-14 20:24:59.543788] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.365 20:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.365 20:24:59 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:22.365 20:24:59 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:22.365 20:24:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:22.365 20:24:59 -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 20:24:59 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- 
target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:22.365 20:24:59 -- target/shutdown.sh@28 -- # cat 00:24:22.365 20:24:59 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:22.365 20:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.365 20:24:59 -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 Malloc1 00:24:22.365 [2024-02-14 20:24:59.635182] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.365 Malloc2 00:24:22.365 Malloc3 00:24:22.365 Malloc4 00:24:22.365 Malloc5 00:24:22.623 Malloc6 00:24:22.623 Malloc7 00:24:22.623 Malloc8 00:24:22.623 Malloc9 00:24:22.623 Malloc10 00:24:22.623 20:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.623 20:25:00 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:22.623 20:25:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:22.623 20:25:00 -- common/autotest_common.sh@10 -- # set +x 00:24:22.883 20:25:00 -- target/shutdown.sh@102 -- # perfpid=1879039 00:24:22.883 20:25:00 -- target/shutdown.sh@103 -- # waitforlisten 1879039 /var/tmp/bdevperf.sock 00:24:22.883 20:25:00 -- common/autotest_common.sh@817 -- # '[' -z 1879039 ']' 00:24:22.883 20:25:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.883 20:25:00 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:22.883 20:25:00 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:22.883 20:25:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:22.883 20:25:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
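The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock..." message above is printed by the waitforlisten helper from autotest_common.sh; only its locals (rpc_addr, max_retries=100) are visible in this trace. A minimal sketch of what such a poll loop has to do, assuming the simplest possible readiness check; the function body below is an approximation, not the real helper:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # /var/tmp/bdevperf.sock in this run
    local max_retries=100
    while ((max_retries--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before it could listen
        [[ -S $rpc_addr ]] && return 0           # RPC UNIX socket showed up
        sleep 0.1
    done
    return 1                                     # timed out waiting
}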
00:24:22.883 20:25:00 -- nvmf/common.sh@520 -- # config=() 00:24:22.883 20:25:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:22.883 20:25:00 -- nvmf/common.sh@520 -- # local subsystem config 00:24:22.883 20:25:00 -- common/autotest_common.sh@10 -- # set +x 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": "$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.883 "hdgst": ${hdgst:-false}, 00:24:22.883 "ddgst": ${ddgst:-false} 00:24:22.883 }, 00:24:22.883 "method": "bdev_nvme_attach_controller" 00:24:22.883 } 00:24:22.883 EOF 00:24:22.883 )") 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": "$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.883 "hdgst": ${hdgst:-false}, 00:24:22.883 "ddgst": ${ddgst:-false} 00:24:22.883 }, 00:24:22.883 "method": "bdev_nvme_attach_controller" 00:24:22.883 } 00:24:22.883 EOF 00:24:22.883 )") 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": "$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.883 "hdgst": ${hdgst:-false}, 00:24:22.883 "ddgst": ${ddgst:-false} 00:24:22.883 }, 00:24:22.883 "method": "bdev_nvme_attach_controller" 00:24:22.883 } 00:24:22.883 EOF 00:24:22.883 )") 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": "$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.883 "hdgst": ${hdgst:-false}, 00:24:22.883 "ddgst": ${ddgst:-false} 00:24:22.883 }, 00:24:22.883 "method": "bdev_nvme_attach_controller" 00:24:22.883 } 00:24:22.883 EOF 00:24:22.883 )") 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": 
"$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.883 "hdgst": ${hdgst:-false}, 00:24:22.883 "ddgst": ${ddgst:-false} 00:24:22.883 }, 00:24:22.883 "method": "bdev_nvme_attach_controller" 00:24:22.883 } 00:24:22.883 EOF 00:24:22.883 )") 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": "$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.883 "hdgst": ${hdgst:-false}, 00:24:22.883 "ddgst": ${ddgst:-false} 00:24:22.883 }, 00:24:22.883 "method": "bdev_nvme_attach_controller" 00:24:22.883 } 00:24:22.883 EOF 00:24:22.883 )") 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": "$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.883 "hdgst": ${hdgst:-false}, 00:24:22.883 "ddgst": ${ddgst:-false} 00:24:22.883 }, 00:24:22.883 "method": "bdev_nvme_attach_controller" 00:24:22.883 } 00:24:22.883 EOF 00:24:22.883 )") 00:24:22.883 [2024-02-14 20:25:00.094451] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:24:22.883 [2024-02-14 20:25:00.094502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1879039 ] 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": "$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.883 "hdgst": ${hdgst:-false}, 00:24:22.883 "ddgst": ${ddgst:-false} 00:24:22.883 }, 00:24:22.883 "method": "bdev_nvme_attach_controller" 00:24:22.883 } 00:24:22.883 EOF 00:24:22.883 )") 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": "$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.883 "hdgst": ${hdgst:-false}, 00:24:22.883 "ddgst": ${ddgst:-false} 00:24:22.883 }, 00:24:22.883 "method": "bdev_nvme_attach_controller" 00:24:22.883 } 00:24:22.883 EOF 00:24:22.883 )") 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.883 20:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:22.883 20:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:22.883 { 00:24:22.883 "params": { 00:24:22.883 "name": "Nvme$subsystem", 00:24:22.883 "trtype": "$TEST_TRANSPORT", 00:24:22.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.883 "adrfam": "ipv4", 00:24:22.883 "trsvcid": "$NVMF_PORT", 00:24:22.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.884 "hdgst": ${hdgst:-false}, 00:24:22.884 "ddgst": ${ddgst:-false} 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 } 00:24:22.884 EOF 00:24:22.884 )") 00:24:22.884 20:25:00 -- nvmf/common.sh@542 -- # cat 00:24:22.884 20:25:00 -- nvmf/common.sh@544 -- # jq . 
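The config=() loop traced above is gen_nvmf_target_json assembling one bdev_nvme_attach_controller stanza per subsystem from a heredoc, then joining the fragments with IFS=, and pretty-printing the result through jq. Condensed into a runnable sketch with the same variable names the trace uses; the enclosing wrapper object that bdevperf ultimately receives is not visible in the log and is omitted here too:

config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .   # comma-join the stanzas, then sanity-check

The joined document reaches bdevperf through process substitution: the launcher line above passes --json /dev/fd/63, where fd 63 carries the output of gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10.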
00:24:22.884 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.884 20:25:00 -- nvmf/common.sh@545 -- # IFS=, 00:24:22.884 20:25:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme1", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 },{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme2", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 },{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme3", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 },{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme4", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 },{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme5", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 },{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme6", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 },{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme7", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 },{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme8", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 
00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 },{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme9", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 },{ 00:24:22.884 "params": { 00:24:22.884 "name": "Nvme10", 00:24:22.884 "trtype": "tcp", 00:24:22.884 "traddr": "10.0.0.2", 00:24:22.884 "adrfam": "ipv4", 00:24:22.884 "trsvcid": "4420", 00:24:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:22.884 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:22.884 "hdgst": false, 00:24:22.884 "ddgst": false 00:24:22.884 }, 00:24:22.884 "method": "bdev_nvme_attach_controller" 00:24:22.884 }' 00:24:22.884 [2024-02-14 20:25:00.155666] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.884 [2024-02-14 20:25:00.226043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.884 [2024-02-14 20:25:00.226099] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:24:24.260 Running I/O for 10 seconds... 00:24:24.260 20:25:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:24.260 20:25:01 -- common/autotest_common.sh@850 -- # return 0 00:24:24.260 20:25:01 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:24.260 20:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.260 20:25:01 -- common/autotest_common.sh@10 -- # set +x 00:24:24.260 20:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.260 20:25:01 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:24.260 20:25:01 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:24.260 20:25:01 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:24.260 20:25:01 -- target/shutdown.sh@57 -- # local ret=1 00:24:24.260 20:25:01 -- target/shutdown.sh@58 -- # local i 00:24:24.260 20:25:01 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:24.260 20:25:01 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:24.260 20:25:01 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:24.260 20:25:01 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:24.260 20:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.260 20:25:01 -- common/autotest_common.sh@10 -- # set +x 00:24:24.260 20:25:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.260 20:25:01 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:24.260 20:25:01 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:24.260 20:25:01 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:24.520 20:25:01 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:24.520 20:25:01 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:24.520 20:25:01 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:24.520 20:25:01 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:24.520 20:25:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.520 20:25:01 -- common/autotest_common.sh@10 -- # set +x 00:24:24.779 20:25:01 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]]
00:24:24.779  20:25:01 -- target/shutdown.sh@60 -- # read_io_count=167
00:24:24.779  20:25:01 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']'
00:24:24.779  20:25:01 -- target/shutdown.sh@64 -- # ret=0
00:24:24.779  20:25:01 -- target/shutdown.sh@65 -- # break
00:24:24.779  20:25:01 -- target/shutdown.sh@69 -- # return 0
00:24:24.779  20:25:01 -- target/shutdown.sh@109 -- # killprocess 1879039
00:24:24.779  20:25:01 -- common/autotest_common.sh@924 -- # '[' -z 1879039 ']'
00:24:24.779  20:25:01 -- common/autotest_common.sh@928 -- # kill -0 1879039
00:24:24.779  20:25:01 -- common/autotest_common.sh@929 -- # uname
00:24:24.779  20:25:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:24:24.779  20:25:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1879039
00:24:24.779  20:25:02 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:24:24.779  20:25:02 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:24:24.779  20:25:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1879039'
00:24:24.779 killing process with pid 1879039
00:24:24.779  20:25:02 -- common/autotest_common.sh@943 -- # kill 1879039
00:24:24.779  20:25:02 -- common/autotest_common.sh@948 -- # wait 1879039
00:24:24.779 Received shutdown signal, test time was about 0.544787 seconds
00:24:24.779
00:24:24.779                                                                                                 Latency(us)
00:24:24.779 Device Information                                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:24:24.779 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.779 Verification LBA range: start 0x0 length 0x400
00:24:24.779 Nvme1n1  : 0.49  485.85  30.37  0.00  0.00  125372.19  8113.98  115343.36
00:24:24.779 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.779 Verification LBA range: start 0x0 length 0x400
00:24:24.779 Nvme2n1  : 0.48  473.35  29.58  0.00  0.00  129749.23  11983.73  119337.94
00:24:24.779 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.779 Verification LBA range: start 0x0 length 0x400
00:24:24.779 Nvme3n1  : 0.54  499.88  31.24  0.00  0.00  112540.53  13419.28  93872.52
00:24:24.779 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.779 Verification LBA range: start 0x0 length 0x400
00:24:24.779 Nvme4n1  : 0.49  389.84  24.36  0.00  0.00  151423.31  23967.45  143804.71
00:24:24.779 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.779 Verification LBA range: start 0x0 length 0x400
00:24:24.779 Nvme5n1  : 0.50  545.64  34.10  0.00  0.00  107466.64  4930.80  99864.38
00:24:24.779 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.779 Verification LBA range: start 0x0 length 0x400
00:24:24.779 Nvme6n1  : 0.50  545.35  34.08  0.00  0.00  107209.75  9175.04  98865.74
00:24:24.779 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.779 Verification LBA range: start 0x0 length 0x400
00:24:24.779 Nvme7n1  : 0.48  472.22  29.51  0.00  0.00  120763.63  12857.54  95869.81
00:24:24.779 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.779 Verification LBA range: start 0x0 length 0x400
00:24:24.779 Nvme8n1  : 0.50  462.33  28.90  0.00  0.00  120940.15  4056.99  105356.92
00:24:24.779 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.779 Verification LBA range: start 0x0 length 0x400
00:24:24.779 Nvme9n1  : 0.49  464.13  29.01  0.00  0.00  118761.96  18599.74  96369.13
00:24:24.779 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.780 Verification LBA range: start 0x0 length 0x400
00:24:24.780 Nvme10n1 : 0.49  462.29  28.89  0.00  0.00  117801.64  16976.94  102860.31
00:24:24.780 ===================================================================================================================
00:24:24.780 Total    : 4800.87  300.05  0.00  0.00  120050.43  4056.99  143804.71
00:24:24.780 [2024-02-14 20:25:02.124757] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:24:25.038  20:25:02 -- target/shutdown.sh@112 -- # sleep 1
00:24:25.974  20:25:03 -- target/shutdown.sh@113 -- # kill -0 1878756
00:24:25.974  20:25:03 -- target/shutdown.sh@115 -- # stoptarget
00:24:25.974  20:25:03 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:25.974  20:25:03 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:25.974  20:25:03 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:25.974  20:25:03 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:25.974  20:25:03 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:25.974  20:25:03 -- nvmf/common.sh@116 -- # sync
00:24:25.974  20:25:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:25.974  20:25:03 -- nvmf/common.sh@119 -- # set +e
00:24:25.974  20:25:03 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:25.974  20:25:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:25.974 rmmod nvme_tcp
00:24:25.974 rmmod nvme_fabrics
00:24:25.974 rmmod nvme_keyring
00:24:26.233  20:25:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:26.233  20:25:03 -- nvmf/common.sh@123 -- # set -e
00:24:26.233  20:25:03 -- nvmf/common.sh@124 -- # return 0
00:24:26.233  20:25:03 -- nvmf/common.sh@477 -- # '[' -n 1878756 ']'
00:24:26.233  20:25:03 -- nvmf/common.sh@478 -- # killprocess 1878756
00:24:26.233  20:25:03 -- common/autotest_common.sh@924 -- # '[' -z 1878756 ']'
00:24:26.233  20:25:03 -- common/autotest_common.sh@928 -- # kill -0 1878756
00:24:26.233  20:25:03 -- common/autotest_common.sh@929 -- # uname
00:24:26.233  20:25:03 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:24:26.233  20:25:03 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1878756
00:24:26.233  20:25:03 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:24:26.233  20:25:03 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:24:26.233  20:25:03 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1878756'
00:24:26.233 killing process with pid 1878756
00:24:26.233  20:25:03 -- common/autotest_common.sh@943 -- # kill 1878756
00:24:26.233  20:25:03 -- common/autotest_common.sh@948 -- # wait 1878756
00:24:26.492  20:25:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:26.492  20:25:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:26.492  20:25:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:26.492  20:25:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:26.492  20:25:03 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:26.493  20:25:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:26.493  20:25:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:26.493  20:25:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:29.031  20:25:05 -- nvmf/common.sh@278 -- # ip -4
addr flush cvl_0_1 00:24:29.031 00:24:29.031 real 0m7.525s 00:24:29.031 user 0m22.158s 00:24:29.031 sys 0m1.244s 00:24:29.031 20:25:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:29.031 20:25:05 -- common/autotest_common.sh@10 -- # set +x 00:24:29.031 ************************************ 00:24:29.031 END TEST nvmf_shutdown_tc2 00:24:29.031 ************************************ 00:24:29.031 20:25:05 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:29.031 20:25:05 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:24:29.031 20:25:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:29.031 20:25:05 -- common/autotest_common.sh@10 -- # set +x 00:24:29.031 ************************************ 00:24:29.031 START TEST nvmf_shutdown_tc3 00:24:29.031 ************************************ 00:24:29.031 20:25:05 -- common/autotest_common.sh@1102 -- # nvmf_shutdown_tc3 00:24:29.031 20:25:05 -- target/shutdown.sh@120 -- # starttarget 00:24:29.031 20:25:05 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:29.031 20:25:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:29.031 20:25:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.031 20:25:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:29.031 20:25:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:29.031 20:25:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:29.031 20:25:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.031 20:25:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.031 20:25:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.031 20:25:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:29.031 20:25:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:29.031 20:25:05 -- common/autotest_common.sh@10 -- # set +x 00:24:29.031 20:25:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:29.031 20:25:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:29.031 20:25:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:29.031 20:25:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:29.031 20:25:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:29.031 20:25:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:29.031 20:25:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:29.031 20:25:05 -- nvmf/common.sh@294 -- # net_devs=() 00:24:29.031 20:25:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:29.031 20:25:05 -- nvmf/common.sh@295 -- # e810=() 00:24:29.031 20:25:05 -- nvmf/common.sh@295 -- # local -ga e810 00:24:29.031 20:25:05 -- nvmf/common.sh@296 -- # x722=() 00:24:29.031 20:25:05 -- nvmf/common.sh@296 -- # local -ga x722 00:24:29.031 20:25:05 -- nvmf/common.sh@297 -- # mlx=() 00:24:29.031 20:25:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:29.031 20:25:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.031 20:25:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:29.031 20:25:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:29.031 20:25:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:29.031 20:25:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:29.031 20:25:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:29.031 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:29.031 20:25:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:29.031 20:25:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:29.031 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:29.031 20:25:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:29.031 20:25:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:29.031 20:25:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:29.031 20:25:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.031 20:25:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:29.032 20:25:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.032 20:25:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:29.032 Found net devices under 0000:af:00.0: cvl_0_0 00:24:29.032 20:25:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.032 20:25:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:29.032 20:25:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.032 20:25:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:29.032 20:25:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.032 20:25:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:29.032 Found net devices under 0000:af:00.1: cvl_0_1 00:24:29.032 20:25:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.032 20:25:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:29.032 20:25:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:29.032 20:25:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:29.032 20:25:05 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:24:29.032  20:25:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:24:29.032  20:25:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:29.032  20:25:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:29.032  20:25:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:29.032  20:25:05 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:24:29.032  20:25:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:29.032  20:25:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:29.032  20:25:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:24:29.032  20:25:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:29.032  20:25:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:29.032  20:25:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:24:29.032  20:25:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:24:29.032  20:25:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:24:29.032  20:25:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:29.032  20:25:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:29.032  20:25:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:29.032  20:25:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:24:29.032  20:25:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:29.032  20:25:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:29.032  20:25:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:29.032  20:25:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:24:29.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:29.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms
00:24:29.032
00:24:29.032 --- 10.0.0.2 ping statistics ---
00:24:29.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:29.032 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:24:29.032  20:25:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:29.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:29.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms
00:24:29.032
00:24:29.032 --- 10.0.0.1 ping statistics ---
00:24:29.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:29.032 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms
00:24:29.032  20:25:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:29.032  20:25:06 -- nvmf/common.sh@410 -- # return 0
00:24:29.032  20:25:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:24:29.032  20:25:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:29.032  20:25:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:24:29.032  20:25:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:24:29.032  20:25:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:29.032  20:25:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:24:29.032  20:25:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:24:29.032  20:25:06 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:24:29.032  20:25:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:24:29.032  20:25:06 -- common/autotest_common.sh@710 -- # xtrace_disable
00:24:29.032  20:25:06 -- common/autotest_common.sh@10 -- # set +x
00:24:29.032  20:25:06 -- nvmf/common.sh@469 -- # nvmfpid=1880281
00:24:29.032  20:25:06 -- nvmf/common.sh@470 -- # waitforlisten 1880281
00:24:29.032  20:25:06 -- common/autotest_common.sh@817 -- # '[' -z 1880281 ']'
00:24:29.032  20:25:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:29.032  20:25:06 -- common/autotest_common.sh@822 -- # local max_retries=100
00:24:29.032  20:25:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:29.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:29.032  20:25:06 -- common/autotest_common.sh@826 -- # xtrace_disable
00:24:29.032  20:25:06 -- common/autotest_common.sh@10 -- # set +x
00:24:29.032  20:25:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:24:29.032 [2024-02-14 20:25:06.263459] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:24:29.032 [2024-02-14 20:25:06.263499] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:29.032 EAL: No free 2048 kB hugepages reported on node 1
00:24:29.032 [2024-02-14 20:25:06.325032] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:29.032 [2024-02-14 20:25:06.400808] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:24:29.032 [2024-02-14 20:25:06.400912] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:29.032 [2024-02-14 20:25:06.400920] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:29.032 [2024-02-14 20:25:06.400926] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
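Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above (nvmf/common.sh@228-267) builds the point-to-point NVMe/TCP test topology: the target-side port is moved into its own network namespace, the initiator port stays in the root namespace, a firewall rule opens the NVMe/TCP listen port, and a ping in each direction proves the path. Every interface name and address below is taken from the trace; only the comments are added:

ip -4 addr flush cvl_0_0                           # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

This is also why NVMF_APP is re-prefixed with NVMF_TARGET_NS_CMD at nvmf/common.sh@269: nvmf_tgt itself has to run inside cvl_0_0_ns_spdk to own the 10.0.0.2 listener.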
00:24:29.032 [2024-02-14 20:25:06.400961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.032 [2024-02-14 20:25:06.401048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.032 [2024-02-14 20:25:06.401154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.032 [2024-02-14 20:25:06.401156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:29.966 20:25:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:29.966 20:25:07 -- common/autotest_common.sh@850 -- # return 0 00:24:29.966 20:25:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:29.966 20:25:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:29.966 20:25:07 -- common/autotest_common.sh@10 -- # set +x 00:24:29.966 20:25:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.966 20:25:07 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.966 20:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.966 20:25:07 -- common/autotest_common.sh@10 -- # set +x 00:24:29.966 [2024-02-14 20:25:07.104869] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.966 20:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.966 20:25:07 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:29.966 20:25:07 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:29.966 20:25:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:29.966 20:25:07 -- common/autotest_common.sh@10 -- # set +x 00:24:29.966 20:25:07 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:29.966 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.966 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.966 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.966 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.966 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.966 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.967 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.967 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.967 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.967 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.967 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.967 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.967 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.967 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.967 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.967 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.967 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.967 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.967 20:25:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.967 20:25:07 -- target/shutdown.sh@28 -- # cat 00:24:29.967 20:25:07 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:29.967 20:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.967 20:25:07 -- common/autotest_common.sh@10 -- # set +x 00:24:29.967 Malloc1 00:24:29.967 [2024-02-14 20:25:07.200500] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.967 Malloc2 
00:24:29.967 Malloc3 00:24:29.967 Malloc4 00:24:29.967 Malloc5 00:24:30.226 Malloc6 00:24:30.226 Malloc7 00:24:30.226 Malloc8 00:24:30.226 Malloc9 00:24:30.226 Malloc10 00:24:30.226 20:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.226 20:25:07 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:30.226 20:25:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.226 20:25:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.226 20:25:07 -- target/shutdown.sh@124 -- # perfpid=1880560 00:24:30.226 20:25:07 -- target/shutdown.sh@125 -- # waitforlisten 1880560 /var/tmp/bdevperf.sock 00:24:30.226 20:25:07 -- common/autotest_common.sh@817 -- # '[' -z 1880560 ']' 00:24:30.226 20:25:07 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:30.226 20:25:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.226 20:25:07 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:30.226 20:25:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:30.226 20:25:07 -- nvmf/common.sh@520 -- # config=() 00:24:30.226 20:25:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.226 20:25:07 -- nvmf/common.sh@520 -- # local subsystem config 00:24:30.226 20:25:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:30.226 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.226 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.226 { 00:24:30.226 "params": { 00:24:30.226 "name": "Nvme$subsystem", 00:24:30.226 "trtype": "$TEST_TRANSPORT", 00:24:30.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.226 "adrfam": "ipv4", 00:24:30.226 "trsvcid": "$NVMF_PORT", 00:24:30.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.226 "hdgst": ${hdgst:-false}, 00:24:30.226 "ddgst": ${ddgst:-false} 00:24:30.226 }, 00:24:30.226 "method": "bdev_nvme_attach_controller" 00:24:30.226 } 00:24:30.226 EOF 00:24:30.226 )") 00:24:30.226 20:25:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.226 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.226 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.226 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.226 { 00:24:30.226 "params": { 00:24:30.226 "name": "Nvme$subsystem", 00:24:30.226 "trtype": "$TEST_TRANSPORT", 00:24:30.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.226 "adrfam": "ipv4", 00:24:30.226 "trsvcid": "$NVMF_PORT", 00:24:30.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.226 "hdgst": ${hdgst:-false}, 00:24:30.226 "ddgst": ${ddgst:-false} 00:24:30.226 }, 00:24:30.226 "method": "bdev_nvme_attach_controller" 00:24:30.226 } 00:24:30.226 EOF 00:24:30.226 )") 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.486 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.486 { 00:24:30.486 "params": { 00:24:30.486 "name": "Nvme$subsystem", 00:24:30.486 "trtype": "$TEST_TRANSPORT", 00:24:30.486 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:24:30.486 "adrfam": "ipv4", 00:24:30.486 "trsvcid": "$NVMF_PORT", 00:24:30.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.486 "hdgst": ${hdgst:-false}, 00:24:30.486 "ddgst": ${ddgst:-false} 00:24:30.486 }, 00:24:30.486 "method": "bdev_nvme_attach_controller" 00:24:30.486 } 00:24:30.486 EOF 00:24:30.486 )") 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.486 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.486 { 00:24:30.486 "params": { 00:24:30.486 "name": "Nvme$subsystem", 00:24:30.486 "trtype": "$TEST_TRANSPORT", 00:24:30.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.486 "adrfam": "ipv4", 00:24:30.486 "trsvcid": "$NVMF_PORT", 00:24:30.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.486 "hdgst": ${hdgst:-false}, 00:24:30.486 "ddgst": ${ddgst:-false} 00:24:30.486 }, 00:24:30.486 "method": "bdev_nvme_attach_controller" 00:24:30.486 } 00:24:30.486 EOF 00:24:30.486 )") 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.486 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.486 { 00:24:30.486 "params": { 00:24:30.486 "name": "Nvme$subsystem", 00:24:30.486 "trtype": "$TEST_TRANSPORT", 00:24:30.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.486 "adrfam": "ipv4", 00:24:30.486 "trsvcid": "$NVMF_PORT", 00:24:30.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.486 "hdgst": ${hdgst:-false}, 00:24:30.486 "ddgst": ${ddgst:-false} 00:24:30.486 }, 00:24:30.486 "method": "bdev_nvme_attach_controller" 00:24:30.486 } 00:24:30.486 EOF 00:24:30.486 )") 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.486 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.486 { 00:24:30.486 "params": { 00:24:30.486 "name": "Nvme$subsystem", 00:24:30.486 "trtype": "$TEST_TRANSPORT", 00:24:30.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.486 "adrfam": "ipv4", 00:24:30.486 "trsvcid": "$NVMF_PORT", 00:24:30.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.486 "hdgst": ${hdgst:-false}, 00:24:30.486 "ddgst": ${ddgst:-false} 00:24:30.486 }, 00:24:30.486 "method": "bdev_nvme_attach_controller" 00:24:30.486 } 00:24:30.486 EOF 00:24:30.486 )") 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.486 [2024-02-14 20:25:07.671460] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:24:30.486 [2024-02-14 20:25:07.671509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1880560 ] 00:24:30.486 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.486 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.486 { 00:24:30.486 "params": { 00:24:30.486 "name": "Nvme$subsystem", 00:24:30.487 "trtype": "$TEST_TRANSPORT", 00:24:30.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "$NVMF_PORT", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.487 "hdgst": ${hdgst:-false}, 00:24:30.487 "ddgst": ${ddgst:-false} 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 } 00:24:30.487 EOF 00:24:30.487 )") 00:24:30.487 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.487 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.487 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.487 { 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme$subsystem", 00:24:30.487 "trtype": "$TEST_TRANSPORT", 00:24:30.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "$NVMF_PORT", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.487 "hdgst": ${hdgst:-false}, 00:24:30.487 "ddgst": ${ddgst:-false} 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 } 00:24:30.487 EOF 00:24:30.487 )") 00:24:30.487 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.487 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.487 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.487 { 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme$subsystem", 00:24:30.487 "trtype": "$TEST_TRANSPORT", 00:24:30.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "$NVMF_PORT", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.487 "hdgst": ${hdgst:-false}, 00:24:30.487 "ddgst": ${ddgst:-false} 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 } 00:24:30.487 EOF 00:24:30.487 )") 00:24:30.487 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.487 20:25:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:30.487 20:25:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:30.487 { 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme$subsystem", 00:24:30.487 "trtype": "$TEST_TRANSPORT", 00:24:30.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "$NVMF_PORT", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.487 "hdgst": ${hdgst:-false}, 00:24:30.487 "ddgst": ${ddgst:-false} 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 } 00:24:30.487 EOF 00:24:30.487 )") 00:24:30.487 20:25:07 -- nvmf/common.sh@542 -- # cat 00:24:30.487 20:25:07 -- nvmf/common.sh@544 -- # jq . 
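Once this second bdevperf instance is up, the harness again gates the shutdown test on real traffic: waitforio (target/shutdown.sh@50-69, traced for the first bdevperf run earlier and again a little further below with read_io_count=129) polls bdev_get_iostat over the bdevperf RPC socket until the first bdev has completed at least 100 reads. A condensed sketch built only from the calls and values visible in the trace:

waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then   # enough reads observed, stop polling
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}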
00:24:30.487 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.487 20:25:07 -- nvmf/common.sh@545 -- # IFS=, 00:24:30.487 20:25:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme1", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 },{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme2", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 },{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme3", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 },{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme4", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 },{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme5", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 },{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme6", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 },{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme7", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 },{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme8", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 
00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 },{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme9", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 },{ 00:24:30.487 "params": { 00:24:30.487 "name": "Nvme10", 00:24:30.487 "trtype": "tcp", 00:24:30.487 "traddr": "10.0.0.2", 00:24:30.487 "adrfam": "ipv4", 00:24:30.487 "trsvcid": "4420", 00:24:30.487 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:30.487 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:30.487 "hdgst": false, 00:24:30.487 "ddgst": false 00:24:30.487 }, 00:24:30.487 "method": "bdev_nvme_attach_controller" 00:24:30.487 }' 00:24:30.487 [2024-02-14 20:25:07.735180] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.487 [2024-02-14 20:25:07.804611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.487 [2024-02-14 20:25:07.804672] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:24:32.390 Running I/O for 10 seconds... 00:24:32.654 20:25:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:32.654 20:25:09 -- common/autotest_common.sh@850 -- # return 0 00:24:32.654 20:25:09 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:32.654 20:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.654 20:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:32.654 20:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.654 20:25:09 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.654 20:25:09 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:32.654 20:25:09 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:32.654 20:25:09 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:32.654 20:25:09 -- target/shutdown.sh@57 -- # local ret=1 00:24:32.654 20:25:09 -- target/shutdown.sh@58 -- # local i 00:24:32.654 20:25:09 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:32.654 20:25:09 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:32.654 20:25:09 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:32.654 20:25:09 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:32.654 20:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.654 20:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:32.654 20:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.654 20:25:09 -- target/shutdown.sh@60 -- # read_io_count=129 00:24:32.654 20:25:09 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:24:32.654 20:25:09 -- target/shutdown.sh@64 -- # ret=0 00:24:32.654 20:25:09 -- target/shutdown.sh@65 -- # break 00:24:32.654 20:25:09 -- target/shutdown.sh@69 -- # return 0 00:24:32.654 20:25:09 -- target/shutdown.sh@134 -- # killprocess 1880281 00:24:32.654 20:25:09 -- common/autotest_common.sh@924 -- # '[' -z 1880281 ']' 00:24:32.654 20:25:09 -- common/autotest_common.sh@928 -- # kill -0 1880281 00:24:32.654 
00:24:32.654 20:25:09 -- common/autotest_common.sh@929 -- # uname
00:24:32.654 20:25:09 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:24:32.654 20:25:09 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1880281
00:24:32.654 20:25:09 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:24:32.654 20:25:09 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:24:32.654 20:25:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1880281' killing process with pid 1880281
00:24:32.654 20:25:09 -- common/autotest_common.sh@943 -- # kill 1880281
00:24:32.654 20:25:09 -- common/autotest_common.sh@948 -- # wait 1880281
00:24:32.654 [2024-02-14 20:25:09.964799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8769a0 is same with the state(5) to be set
00:24:32.655 [... same recv-state error for tqpair=0x8769a0 repeated dozens of times (timestamps 20:25:09.964847 through 20:25:09.965218); duplicates elided ...]
00:24:32.655 [2024-02-14 20:25:09.966307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8792b0 is same with the state(5) to be set
00:24:32.655 [2024-02-14 20:25:09.966333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8792b0 is same with the state(5) to be set
00:24:32.655 [2024-02-14 20:25:09.967187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x876e30 is same with the state(5) to be set
00:24:32.655 [... same recv-state error for tqpair=0x876e30 repeated dozens of times (through 20:25:09.967571); duplicates elided ...]
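[Editor note] The waitforio trace above (target/shutdown.sh@57-69) amounts to a bounded poll: up to ten reads of bdev_get_iostat on Nvme1n1, breaking once at least 100 reads have completed (here read_io_count=129 on the first probe). A self-contained sketch of that loop, assuming a stock SPDK rpc.py on PATH instead of the suite's rpc_cmd wrapper, and a one-second pacing that the trace itself does not show:

    # Sketch of the waitforio logic traced above.
    waitforio() {
      local sock=$1 bdev=$2 ret=1 i count
      for ((i = 10; i != 0; i--)); do
        count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
          ret=0
          break
        fi
        sleep 1   # pacing is an assumption; the traced run passed on the first probe
      done
      return $ret
    }

    waitforio /var/tmp/bdevperf.sock Nvme1n1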
00:24:32.656 [2024-02-14 20:25:09.968809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8772c0 is same with the state(5) to be set
00:24:32.656 [... same recv-state error for tqpair=0x8772c0 repeated dozens of times (through 20:25:09.969168); duplicates elided ...]
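[Editor note] The killprocess 1880281 call traced before this error burst (autotest_common.sh@924-948) follows the usual kill-and-reap pattern: confirm the pid is alive with kill -0, record the process name, send SIGTERM, then wait to collect the exit status. A minimal sketch under those assumptions (the real helper also special-cases processes whose comm is sudo, as the '[' reactor_1 = sudo ']' check above shows):

    # Sketch of the kill-and-reap pattern traced above.
    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                 # still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")    # reactor_1 in this run
      echo "killing process with pid $pid"
      kill "$pid"                                # default signal is SIGTERM
      wait "$pid"                                # reaping works only for our own children
    }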
00:24:32.657 [2024-02-14 20:25:09.970741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877750 is same with the state(5) to be set
00:24:32.657 [... same recv-state error for tqpair=0x877750 repeated dozens of times (through 20:25:09.971080); duplicates elided ...]
00:24:32.657 [2024-02-14 20:25:09.971196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:32.657 [2024-02-14 20:25:09.971225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.657 [2024-02-14 20:25:09.971234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:32.657 [2024-02-14 20:25:09.971241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.657 [2024-02-14 20:25:09.971248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:32.657 [2024-02-14 20:25:09.971254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.657 [2024-02-14 20:25:09.971262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:32.657 [2024-02-14 20:25:09.971268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.657 [2024-02-14 20:25:09.971275] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196dce0 is same with the state(5) to be set
00:24:32.658 [... analogous ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequences (nvme_qpair.c: 223/474, then nvme_tcp.c: 320) for tqpair=0x18b0310, 0x1897630, 0x196add0 and 0x1953280; near-duplicates elided ...]
00:24:32.658 [2024-02-14 20:25:09.974227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set
00:24:32.658 [... same recv-state error for tqpair=0x877be0 repeated dozens of times (through 20:25:09.974605); duplicates elided ...]
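[Editor note] These recv-state and SQ-deletion bursts appear to be the expected noise of the shutdown test killing the target while bdevperf's ten TCP qpairs are still live: the same message recurs with only the tqpair address changing, and the outstanding admin ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION. When triaging a log like this, collapsing the repeats first makes the distinct qpairs visible; one way, assuming the raw console output has been saved as build.log (a hypothetical path):

    # Count each distinct tqpair/state pair instead of reading thousands of
    # near-identical lines.
    grep -oE 'tqpair=0x[0-9a-f]+ is same with the state\([0-9]+\)' build.log \
      | sort | uniq -c | sort -rn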
recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set 00:24:32.658 [2024-02-14 20:25:09.974442] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877be0 is same with the state(5) to be set
00:24:32.658 [... same tcp.c:1574 *ERROR* line repeated for tqpair=0x877be0 through 2024-02-14 20:25:09.974605 ...]
00:24:32.659 [2024-02-14 20:25:09.977056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878070 is same with the state(5) to be set
00:24:32.659 [... same tcp.c:1574 *ERROR* line repeated for tqpair=0x878070 through 2024-02-14 20:25:09.977425 ...]
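For readers triaging this flood: tcp.c:1574 sits in SPDK's NVMe-oF TCP target transport, and the message fires when a qpair is asked to enter the receive state it is already in, here state 5, hit over and over while connections are torn down for the controller reset. A minimal C sketch of that kind of guard follows; the type, field, and enum names are illustrative assumptions, not SPDK's actual spdk_nvmf_tcp_qpair internals, and the enum ordering (ERROR == 5) is a guess made only so the printed text matches the log.

    /* Sketch of a recv-state guard like the one behind the repeated *ERROR* line.
     * All names here are hypothetical stand-ins, not copied from SPDK. */
    #include <stdio.h>

    enum recv_state {
        RECV_STATE_AWAIT_PDU_READY = 0,
        RECV_STATE_AWAIT_PDU_CH,
        RECV_STATE_AWAIT_PDU_PSH,
        RECV_STATE_AWAIT_PDU_PAYLOAD,
        RECV_STATE_QUIESCING,
        RECV_STATE_ERROR,          /* == 5, assumed to be the "state(5)" in the log */
    };

    struct tcp_qpair {
        enum recv_state recv_state;
    };

    static void qpair_set_recv_state(struct tcp_qpair *tqpair, enum recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Re-entering the current state is a no-op that only logs, so a
             * qpair wedged in teardown emits this line on every retry. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
        /* per-state bookkeeping (buffer resets, error cleanup) would go here */
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };

        /* Two attempts to re-enter ERROR: each just logs, as in the CI output. */
        qpair_set_recv_state(&q, RECV_STATE_ERROR);
        qpair_set_recv_state(&q, RECV_STATE_ERROR);
        return 0;
    }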
00:24:32.659 [2024-02-14 20:25:09.978498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878500 is same with the state(5) to be set
00:24:32.660 [... same tcp.c:1574 *ERROR* line repeated for tqpair=0x878500 through 2024-02-14 20:25:09.978975; the run was interleaved mid-line with the nvme_qpair output below ...]
00:24:32.660 [2024-02-14 20:25:09.978517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.660 [... completions elided from here on: each command below completed with the same ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 status ...]
00:24:32.660 [2024-02-14 20:25:09.978556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.660 [2024-02-14 20:25:09.978855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.978875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.978891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.978909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.978927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.978943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.978962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.978979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.978994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
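Every completion in the burst above carries the same status, printed as "ABORTED - SQ DELETION (00/08)": status code type 00h (generic command status) and status code 08h (command aborted due to SQ deletion), i.e. the queued reads and writes were discarded when their submission queue was deleted for the controller reset. A short sketch of that decoding follows; the status table is deliberately abbreviated and nvme_status_string is a hypothetical helper, not SPDK's API (spdk_nvme_print_completion itself formats many more fields).

    /* Decode the "(SCT/SC)" pair shown in the completion lines above. */
    #include <stdint.h>
    #include <stdio.h>

    static const char *nvme_status_string(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x0) {            /* generic command status */
            switch (sc) {
            case 0x00: return "SUCCESS";
            case 0x08: return "ABORTED - SQ DELETION"; /* NVMe generic status 08h */
            default:   break;
            }
        }
        return "UNKNOWN";
    }

    int main(void)
    {
        uint8_t sct = 0x0, sc = 0x08;    /* the (00/08) seen in the log */

        printf("%s (%02x/%02x)\n", nvme_status_string(sct, sc), sct, sc);
        return 0;
    }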
00:24:32.661 [... completions still elided: each command below completed with the same ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 status ...]
00:24:32.661 [2024-02-14 20:25:09.979068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.661 [2024-02-14 20:25:09.979347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.662 [2024-02-14 20:25:09.979361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.662 [2024-02-14 20:25:09.979375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.662 [2024-02-14 20:25:09.979389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.662 [2024-02-14 20:25:09.979404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.662 [2024-02-14 20:25:09.979417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.662 [2024-02-14 20:25:09.979431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.662 [2024-02-14 20:25:09.979445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.662 [2024-02-14 20:25:09.979460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.662 [2024-02-14 20:25:09.979474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.979493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.979501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.979509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.979516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.979522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.979530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.979536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.979615] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c3c40 was disconnected and freed. reset controller. 00:24:32.662 [2024-02-14 20:25:09.979744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878990 is same with the state(5) to be set 00:24:32.662 [2024-02-14 20:25:09.979762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878990 is same with the state(5) to be set 00:24:32.662 [2024-02-14 20:25:09.979769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878990 is same with the state(5) to be set 00:24:32.662 [2024-02-14 20:25:09.979778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878990 is same with the state(5) to be set 00:24:32.662 [2024-02-14 20:25:09.979784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878990 is same with the state(5) to be set 00:24:32.662 [2024-02-14 20:25:09.979972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.979992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.662 [2024-02-14 20:25:09.980316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.662 [2024-02-14 20:25:09.980324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:32.662 [2024-02-14 20:25:09.980330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.663 [2024-02-14 20:25:09.980338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.663 [2024-02-14 20:25:09.980346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.663 [2024-02-14 20:25:09.980354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.663 [2024-02-14 20:25:09.980364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.663 [2024-02-14 20:25:09.980372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.663 [2024-02-14 20:25:09.980378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.663 [2024-02-14 20:25:09.980386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.663 [2024-02-14 20:25:09.980394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.663 [2024-02-14 20:25:09.980402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.663 [2024-02-14 20:25:09.980408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.663 [2024-02-14 20:25:09.980416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.663 [2024-02-14 20:25:09.980422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.663 [2024-02-14 20:25:09.980430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.663 [2024-02-14 20:25:09.980436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.363941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.363988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 
20:25:10.364069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.364939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.364967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.365001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.365031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.365053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.241 [2024-02-14 20:25:10.365081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.241 [2024-02-14 20:25:10.370312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878990 is same with the state(5) to be set 00:24:33.241 [2024-02-14 20:25:10.370347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878990 is same with the state(5) to be set 00:24:33.241 [2024-02-14 20:25:10.370358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x878990 is same with the state(5) to be set [... same message repeated for tqpair=0x878990 through 2024-02-14 20:25:10.370831, then for tqpair=0x878e20 from 20:25:10.371674 through 20:25:10.372026 ...] 00:24:33.243 [2024-02-14 20:25:10.372032]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878e20 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.372037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878e20 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.372043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878e20 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.372048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878e20 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.372054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878e20 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.375559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28672 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.243 [2024-02-14 20:25:10.375828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.375840] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2abd1d0 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.376265] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2abd1d0 was disconnected and freed. reset controller. 00:24:33.243 [2024-02-14 20:25:10.376356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376445] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196d250 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.376476] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196dce0 (9): Bad file descriptor 00:24:33.243 [2024-02-14 20:25:10.376513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376526] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376603] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198e610 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.376636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376731] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19536b0 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.376753] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0310 (9): Bad file descriptor 00:24:33.243 [2024-02-14 20:25:10.376774] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897630 (9): Bad file descriptor 00:24:33.243 [2024-02-14 20:25:10.376793] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196add0 (9): Bad file descriptor 00:24:33.243 [2024-02-14 20:25:10.376824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
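
The "(00/08)" pair printed with each completion above is the NVMe status code type / status code: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion". That is why every outstanding I/O and ASYNC EVENT REQUEST is reported as ABORTED here: the submission queues are being deleted as part of controller teardown, not because the commands themselves failed. A minimal sketch of the decoding in C; the helper name is hypothetical and this is not SPDK code, only the bit layout used by struct spdk_nvme_status (phase bit in bit 0, SC in bits 8:1, SCT in bits 11:9):

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical helper, not from the SPDK tree: unpack a raw 16-bit NVMe
     * completion status word into the (SCT/SC) pair that
     * spdk_nvme_print_completion renders as "(00/08)" above. */
    static void print_sct_sc(uint16_t status_raw)
    {
        uint16_t sc  = (status_raw >> 1) & 0xff; /* status code, bits 8:1 */
        uint16_t sct = (status_raw >> 9) & 0x07; /* status code type, bits 11:9 */
        printf("(%02x/%02x)%s\n", sct, sc,
               (sct == 0x0 && sc == 0x08) ? " ABORTED - SQ DELETION" : "");
    }

    int main(void)
    {
        /* SC 0x08 under the generic (0x0) status code type. */
        print_sct_sc(0x08 << 1); /* -> "(00/08) ABORTED - SQ DELETION" */
        return 0;
    }
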
00:24:33.243 [2024-02-14 20:25:10.376837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376912] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18978e0 is same with the state(5) to be set 00:24:33.243 [2024-02-14 20:25:10.376948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.376981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.376992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.377001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.243 [2024-02-14 20:25:10.377012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.243 [2024-02-14 20:25:10.377022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.377032] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1989d80 is same with the state(5) to be set 00:24:33.244 [2024-02-14 20:25:10.377055] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1953280 (9): Bad file descriptor 00:24:33.244 [2024-02-14 20:25:10.380461] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:33.244 [2024-02-14 20:25:10.381006] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:33.244 [2024-02-14 20:25:10.382307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.244 [2024-02-14 20:25:10.382674] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.244 [2024-02-14 20:25:10.382692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196add0 with addr=10.0.0.2, port=4420 00:24:33.244 [2024-02-14 20:25:10.382705] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196add0 is same with the state(5) to be set 00:24:33.244 [2024-02-14 20:25:10.382767] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:33.244 [2024-02-14 20:25:10.382831] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:33.244 [2024-02-14 20:25:10.382878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.382895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.382915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.382926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.382939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.382950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.382963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.382974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.382992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383085] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.244 [2024-02-14 20:25:10.383681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.244 [2024-02-14 20:25:10.383694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.383986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.383999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:33.245 [2024-02-14 20:25:10.384264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.245 [2024-02-14 20:25:10.384377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.245 [2024-02-14 20:25:10.384389] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21080 is same with the state(5) to be set 00:24:33.245 [2024-02-14 20:25:10.384455] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f21080 was disconnected and freed. reset controller. 
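
The sequence above is the teardown-and-recover path: once a qpair is disconnected and freed, bdev_nvme resets the controller and the host immediately retries the fabric connection. The "connect() failed, errno = 111" entries (above and below) are those retries racing the target's listener: on Linux, errno 111 is ECONNREFUSED, i.e. the TCP SYN to 10.0.0.2:4420 was answered with RST because nothing was accepting yet. A standalone sketch that reproduces the same errno; the address and port are taken from the log, and against an unreachable (rather than refusing) peer you would see ETIMEDOUT or EHOSTUNREACH instead:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Same style of connect attempt the SPDK posix sock layer makes;
         * with no listener on the port, connect() fails and glibc reports
         * errno 111 (ECONNREFUSED) -- the value logged by posix_sock_create. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }
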
00:24:33.245 [2024-02-14 20:25:10.385760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.245 [2024-02-14 20:25:10.386069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.245 [2024-02-14 20:25:10.386085] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1953280 with addr=10.0.0.2, port=4420 00:24:33.245 [2024-02-14 20:25:10.386097] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1953280 is same with the state(5) to be set 00:24:33.246 [2024-02-14 20:25:10.386120] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196add0 (9): Bad file descriptor 00:24:33.246 [2024-02-14 20:25:10.387669] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:33.246 [2024-02-14 20:25:10.387724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.387985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.387995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.246 [2024-02-14 20:25:10.388600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.246 [2024-02-14 20:25:10.388613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.388988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.388998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.389021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.389044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.389066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.389089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.389112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.389134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.389157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.389182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.389205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.389216] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409650 is same with the state(5) to be set 00:24:33.247 [2024-02-14 20:25:10.389291] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2409650 was disconnected and freed. reset controller. 00:24:33.247 [2024-02-14 20:25:10.389349] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:33.247 [2024-02-14 20:25:10.389482] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:33.247 [2024-02-14 20:25:10.389522] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1953280 (9): Bad file descriptor 00:24:33.247 [2024-02-14 20:25:10.389534] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:33.247 [2024-02-14 20:25:10.389543] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:33.247 [2024-02-14 20:25:10.389553] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:24:33.247 [2024-02-14 20:25:10.389573] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196d250 (9): Bad file descriptor 00:24:33.247 [2024-02-14 20:25:10.389597] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198e610 (9): Bad file descriptor 00:24:33.247 [2024-02-14 20:25:10.389612] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19536b0 (9): Bad file descriptor 00:24:33.247 [2024-02-14 20:25:10.389658] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18978e0 (9): Bad file descriptor 00:24:33.247 [2024-02-14 20:25:10.389680] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1989d80 (9): Bad file descriptor 00:24:33.247 [2024-02-14 20:25:10.391127] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:33.247 [2024-02-14 20:25:10.391166] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.247 [2024-02-14 20:25:10.391185] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:33.247 [2024-02-14 20:25:10.391522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.247 [2024-02-14 20:25:10.391816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.247 [2024-02-14 20:25:10.391831] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196dce0 with addr=10.0.0.2, port=4420 00:24:33.247 [2024-02-14 20:25:10.391841] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196dce0 is same with the state(5) to be set 00:24:33.247 [2024-02-14 20:25:10.391851] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:33.247 [2024-02-14 20:25:10.391859] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:33.247 [2024-02-14 20:25:10.391868] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:33.247 [2024-02-14 20:25:10.391895] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:33.247 [2024-02-14 20:25:10.391947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.391964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.391978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.391988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.391999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.247 [2024-02-14 20:25:10.392008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.247 [2024-02-14 20:25:10.392019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 
20:25:10.392158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392358] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.248 [2024-02-14 20:25:10.392798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.248 [2024-02-14 20:25:10.392809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.392829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.392850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.392870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.392890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.392909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.392929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.392949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.392968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.392989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.392998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.393241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.393251] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2400 is same with the state(5) to be set 00:24:33.249 [2024-02-14 20:25:10.394582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.249 [2024-02-14 20:25:10.394906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.249 [2024-02-14 20:25:10.394915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ/WRITE commands on sqid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
00:24:33.251 [2024-02-14 20:25:10.395923] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b2550 is same with the state(5) to be set
00:24:33.251 [2024-02-14 20:25:10.397578] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:33.251 [2024-02-14 20:25:10.397605] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:33.251 [2024-02-14 20:25:10.397619] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:33.251 [2024-02-14 20:25:10.397963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.251 [2024-02-14 20:25:10.398444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.251 [2024-02-14 20:25:10.398459] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198e610 with addr=10.0.0.2, port=4420
00:24:33.251 [2024-02-14 20:25:10.398469] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198e610 is same with the state(5) to be set
00:24:33.251 [2024-02-14 20:25:10.398483] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196dce0 (9): Bad file descriptor
[... repeated READ/WRITE commands on sqid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
00:24:33.253 [2024-02-14 20:25:10.399994] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x274f0d0 is same with the state(5) to be set
00:24:33.253 [2024-02-14 20:25:10.400049] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x274f0d0 was disconnected and freed. reset controller.
00:24:33.253 [2024-02-14 20:25:10.400069] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:33.253 [2024-02-14 20:25:10.400383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.253 [2024-02-14 20:25:10.400740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.253 [2024-02-14 20:25:10.400750] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897630 with addr=10.0.0.2, port=4420
00:24:33.253 [2024-02-14 20:25:10.400758] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897630 is same with the state(5) to be set
00:24:33.253 [2024-02-14 20:25:10.401153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.253 [2024-02-14 20:25:10.401439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.253 [2024-02-14 20:25:10.401450] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b0310 with addr=10.0.0.2, port=4420
00:24:33.253 [2024-02-14 20:25:10.401457] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b0310 is same with the state(5) to be set
00:24:33.253 [2024-02-14 20:25:10.401466] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198e610 (9): Bad file descriptor
00:24:33.253 [2024-02-14 20:25:10.401477] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:24:33.253 [2024-02-14 20:25:10.401484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:24:33.253 [2024-02-14 20:25:10.401490] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:24:33.253 [2024-02-14 20:25:10.401528] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:33.253 [2024-02-14 20:25:10.401539] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:33.253 [2024-02-14 20:25:10.403012] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:33.253 [2024-02-14 20:25:10.403040] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:33.253 [2024-02-14 20:25:10.403478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.253 [2024-02-14 20:25:10.403782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.253 [2024-02-14 20:25:10.403794] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196add0 with addr=10.0.0.2, port=4420
00:24:33.253 [2024-02-14 20:25:10.403801] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196add0 is same with the state(5) to be set
00:24:33.253 [2024-02-14 20:25:10.403812] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897630 (9): Bad file descriptor
00:24:33.253 [2024-02-14 20:25:10.403820] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0310 (9): Bad file descriptor
00:24:33.253 [2024-02-14 20:25:10.403828] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:24:33.253 [2024-02-14 20:25:10.403835] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:24:33.253 [2024-02-14 20:25:10.403842] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:24:33.253 [2024-02-14 20:25:10.403862] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... repeated READ/WRITE commands on sqid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
00:24:33.255 [2024-02-14 20:25:10.404943] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2266940 is same with the state(5) to be set
00:24:33.255 [2024-02-14 20:25:10.405973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.255 [2024-02-14 20:25:10.405987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:33.255 [2024-02-14 20:25:10.405998] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26624 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.255 [2024-02-14 20:25:10.406479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.255 [2024-02-14 20:25:10.406487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:33.256 [2024-02-14 20:25:10.406791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 
[2024-02-14 20:25:10.406944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.406974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.406981] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ac3c0 is same with the state(5) to be set 00:24:33.256 [2024-02-14 20:25:10.408005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.408019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.408029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.408038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.408047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.408054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.408062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.408069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.408078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.408084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.408092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.408099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.408107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.256 [2024-02-14 20:25:10.408113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.256 [2024-02-14 20:25:10.408122] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.257 [2024-02-14 20:25:10.408714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.257 [2024-02-14 20:25:10.408722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.258 [2024-02-14 20:25:10.408984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.258 [2024-02-14 20:25:10.408992] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291a490 is same with the state(5) to be set 00:24:33.258 [2024-02-14 20:25:10.412525] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
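[Editorial note] The abort storm above is mechanical: resetting a TCP controller deletes its submission queues, so every command still in flight on qid:1 is completed with the generic ABORTED - SQ DELETION status (SCT 00 / SC 08), and SPDK prints one NOTICE pair per outstanding command. When triaging a dump like this it helps to aggregate rather than read line by line; the sketch below is hypothetical throwaway tooling (not part of SPDK or this test suite, and 'console.log' is a placeholder for a saved copy of this output) that tallies the aborted commands by opcode and reports the LBA span.

import re
from collections import Counter

# Matches SPDK's nvme_io_qpair_print_command NOTICE lines seen above.
CMD = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) "
    r"lba:(\d+) len:(\d+)"
)

ops = Counter()
lbas = []
with open("console.log") as f:      # placeholder path, see note above
    for line in f:
        m = CMD.search(line)
        if m:
            ops[m.group(1)] += 1    # count READs vs WRITEs
            lbas.append(int(m.group(5)))

print(ops)
if lbas:
    print("lba span:", min(lbas), "-", max(lbas))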
00:24:33.258 [2024-02-14 20:25:10.412550] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:33.258 [2024-02-14 20:25:10.412562] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:33.258 task offset: 24320 on job bdev=Nvme4n1 fails
00:24:33.258
00:24:33.258 Latency(us)
00:24:33.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:33.258 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme1n1 ended in about 0.86 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme1n1 : 0.86 190.46 11.90 74.33 0.00 240473.87 78892.86 493330.04
00:24:33.258 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme2n1 ended in about 0.86 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme2n1 : 0.86 189.88 11.87 74.10 0.00 239030.65 79891.50 463370.73
00:24:33.258 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme3n1 ended in about 0.85 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme3n1 : 0.85 192.03 12.00 74.94 0.00 234221.39 88379.98 445395.14
00:24:33.258 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme4n1 ended in about 0.85 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme4n1 : 0.85 194.04 12.13 75.72 0.00 229628.95 83386.76 429416.84
00:24:33.258 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme5n1 ended in about 0.87 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme5n1 : 0.87 187.93 11.75 73.34 0.00 235283.55 66409.81 483343.60
00:24:33.258 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme6n1 ended in about 0.86 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme6n1 : 0.86 190.07 11.88 74.63 0.00 229958.63 24466.77 503316.48
00:24:33.258 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme7n1 ended in about 0.87 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme7n1 : 0.87 187.50 11.72 73.17 0.00 231636.09 58171.00 493330.04
00:24:33.258 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme8n1 ended in about 0.87 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme8n1 : 0.87 188.56 11.78 73.58 0.00 228163.83 76895.57 441400.56
00:24:33.258 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme9n1 ended in about 0.88 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme9n1 : 0.88 187.07 11.69 73.00 0.00 227953.59 54925.41 501319.19
00:24:33.258 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:33.258 Job: Nvme10n1 ended in about 0.85 seconds with error
00:24:33.258 Verification LBA range: start 0x0 length 0x400
00:24:33.258 Nvme10n1 : 0.85 193.64 12.10 75.57 0.00 217491.92 76895.57 405449.39
00:24:33.258 ===================================================================================================================
00:24:33.258 Total : 1901.18 118.82 742.38 0.00 231384.87 24466.77 503316.48
00:24:33.258 [2024-02-14 20:25:10.439842] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:33.258 [2024-02-14 20:25:10.439889] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:33.258 [2024-02-14 20:25:10.440276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.258 [2024-02-14 20:25:10.440626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.258 [2024-02-14 20:25:10.440639] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1989d80 with addr=10.0.0.2, port=4420
00:24:33.258 [2024-02-14 20:25:10.440657] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1989d80 is same with the state(5) to be set
00:24:33.258 [2024-02-14 20:25:10.440673] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196add0 (9): Bad file descriptor
00:24:33.258 [2024-02-14 20:25:10.440685] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:33.258 [2024-02-14 20:25:10.440693] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:33.258 [2024-02-14 20:25:10.440702] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:33.258 [2024-02-14 20:25:10.440718] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:24:33.258 [2024-02-14 20:25:10.440725] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:24:33.258 [2024-02-14 20:25:10.440732] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:24:33.258 [2024-02-14 20:25:10.440760] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:33.258 [2024-02-14 20:25:10.440776] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:33.258 [2024-02-14 20:25:10.440799] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:33.259 [2024-02-14 20:25:10.441157] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:33.259 [2024-02-14 20:25:10.441174] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:24:33.259 [2024-02-14 20:25:10.441191] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:33.259 [2024-02-14 20:25:10.441199] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
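[Editorial note] Two quick consistency checks on the bdevperf table above: the Total row should be the column-wise sum of the ten device rows, and with 64 KiB I/Os (IO size: 65536) the MiB/s column should equal IOPS/16, since 65536 B is 1/16 MiB. A minimal check, with the figures re-keyed from the table:

# Per-device (IOPS, MiB/s, Fail/s) re-keyed from the table above.
rows = {
    "Nvme1n1":  (190.46, 11.90, 74.33),
    "Nvme2n1":  (189.88, 11.87, 74.10),
    "Nvme3n1":  (192.03, 12.00, 74.94),
    "Nvme4n1":  (194.04, 12.13, 75.72),
    "Nvme5n1":  (187.93, 11.75, 73.34),
    "Nvme6n1":  (190.07, 11.88, 74.63),
    "Nvme7n1":  (187.50, 11.72, 73.17),
    "Nvme8n1":  (188.56, 11.78, 73.58),
    "Nvme9n1":  (187.07, 11.69, 73.00),
    "Nvme10n1": (193.64, 12.10, 75.57),
}
iops, mibs, fails = (sum(col) for col in zip(*rows.values()))
print(f"{iops:.2f} {mibs:.2f} {fails:.2f}")  # 1901.18 118.82 742.38, matching Total
# 64 KiB per I/O -> throughput in MiB/s is IOPS / 16:
assert all(abs(m - i / 16) < 0.01 for i, m, _ in rows.values())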
00:24:33.259 [2024-02-14 20:25:10.441537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.441948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.441961] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196d250 with addr=10.0.0.2, port=4420
00:24:33.259 [2024-02-14 20:25:10.441971] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196d250 is same with the state(5) to be set
00:24:33.259 [2024-02-14 20:25:10.442278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.442631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.442643] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19536b0 with addr=10.0.0.2, port=4420
00:24:33.259 [2024-02-14 20:25:10.442655] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19536b0 is same with the state(5) to be set
00:24:33.259 [2024-02-14 20:25:10.442946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.443229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.443240] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18978e0 with addr=10.0.0.2, port=4420
00:24:33.259 [2024-02-14 20:25:10.443248] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18978e0 is same with the state(5) to be set
00:24:33.259 [2024-02-14 20:25:10.443260] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1989d80 (9): Bad file descriptor
00:24:33.259 [2024-02-14 20:25:10.443274] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:24:33.259 [2024-02-14 20:25:10.443281] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:24:33.259 [2024-02-14 20:25:10.443289] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:24:33.259 [2024-02-14 20:25:10.443310] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:33.259 [2024-02-14 20:25:10.443340] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:33.259 [2024-02-14 20:25:10.443351] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:33.259 [2024-02-14 20:25:10.444164] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:24:33.259 [2024-02-14 20:25:10.444194] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:33.259 [2024-02-14 20:25:10.444453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.444806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.444820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1953280 with addr=10.0.0.2, port=4420
00:24:33.259 [2024-02-14 20:25:10.444828] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1953280 is same with the state(5) to be set
00:24:33.259 [2024-02-14 20:25:10.444839] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196d250 (9): Bad file descriptor
00:24:33.259 [2024-02-14 20:25:10.444849] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19536b0 (9): Bad file descriptor
00:24:33.259 [2024-02-14 20:25:10.444858] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18978e0 (9): Bad file descriptor
00:24:33.259 [2024-02-14 20:25:10.444869] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:24:33.259 [2024-02-14 20:25:10.444876] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:24:33.259 [2024-02-14 20:25:10.444884] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:24:33.259 [2024-02-14 20:25:10.444952] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:33.259 [2024-02-14 20:25:10.444964] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:33.259 [2024-02-14 20:25:10.444973] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:33.259 [2024-02-14 20:25:10.444982] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:33.259 [2024-02-14 20:25:10.445865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.446152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.259 [2024-02-14 20:25:10.446163] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196dce0 with addr=10.0.0.2, port=4420
00:24:33.259 [2024-02-14 20:25:10.446171] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196dce0 is same with the state(5) to be set
00:24:33.259 [2024-02-14 20:25:10.446181] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1953280 (9): Bad file descriptor
00:24:33.259 [2024-02-14 20:25:10.446190] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:24:33.259 [2024-02-14 20:25:10.446197] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:24:33.259 [2024-02-14 20:25:10.446205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:24:33.259 [2024-02-14 20:25:10.446220] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:33.259 [2024-02-14 20:25:10.446226] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:33.259 [2024-02-14 20:25:10.446233] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:33.259 [2024-02-14 20:25:10.446244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:33.259 [2024-02-14 20:25:10.446252] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:33.259 [2024-02-14 20:25:10.446259] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:33.259 [2024-02-14 20:25:10.446314] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.259 [2024-02-14 20:25:10.446322] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.259 [2024-02-14 20:25:10.446329] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.259 [2024-02-14 20:25:10.446625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.259 [2024-02-14 20:25:10.446897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.259 [2024-02-14 20:25:10.446909] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198e610 with addr=10.0.0.2, port=4420 00:24:33.259 [2024-02-14 20:25:10.446917] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198e610 is same with the state(5) to be set 00:24:33.259 [2024-02-14 20:25:10.447115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.259 [2024-02-14 20:25:10.447698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.259 [2024-02-14 20:25:10.447710] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b0310 with addr=10.0.0.2, port=4420 00:24:33.259 [2024-02-14 20:25:10.447718] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b0310 is same with the state(5) to be set 00:24:33.259 [2024-02-14 20:25:10.448066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.259 [2024-02-14 20:25:10.448365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.259 [2024-02-14 20:25:10.448375] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897630 with addr=10.0.0.2, port=4420 00:24:33.259 [2024-02-14 20:25:10.448383] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897630 is same with the state(5) to be set 00:24:33.259 [2024-02-14 20:25:10.448393] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196dce0 (9): Bad file descriptor 00:24:33.259 [2024-02-14 20:25:10.448401] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:33.259 [2024-02-14 20:25:10.448407] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:33.259 [2024-02-14 20:25:10.448415] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:33.259 [2024-02-14 20:25:10.448460] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.259 [2024-02-14 20:25:10.448471] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198e610 (9): Bad file descriptor 00:24:33.259 [2024-02-14 20:25:10.448481] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0310 (9): Bad file descriptor 00:24:33.259 [2024-02-14 20:25:10.448490] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897630 (9): Bad file descriptor 00:24:33.259 [2024-02-14 20:25:10.448498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:33.259 [2024-02-14 20:25:10.448505] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:33.260 [2024-02-14 20:25:10.448515] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:33.260 [2024-02-14 20:25:10.448543] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.260 [2024-02-14 20:25:10.448550] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:33.260 [2024-02-14 20:25:10.448557] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:33.260 [2024-02-14 20:25:10.448564] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:33.260 [2024-02-14 20:25:10.448574] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:33.260 [2024-02-14 20:25:10.448580] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:33.260 [2024-02-14 20:25:10.448587] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:33.260 [2024-02-14 20:25:10.448596] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.260 [2024-02-14 20:25:10.448603] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.260 [2024-02-14 20:25:10.448610] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.260 [2024-02-14 20:25:10.448638] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.260 [2024-02-14 20:25:10.448645] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:33.260 [2024-02-14 20:25:10.448657] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
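
The wall of connect()/reset errors above is the expected tail of nvmf_shutdown_tc3: the target side is gone while bdevperf still holds controllers cnode1 through cnode10, so every reconnect attempt hits ECONNREFUSED (errno 111), each controller is marked failed, and queued resets abort. errno 111 simply means nothing is listening on 10.0.0.2:4420 any more; a minimal bash probe of the same condition (illustrative only, not part of the suite) would be:

    # try to open the NVMe/TCP listener the way the failing qpairs do;
    # bash's /dev/tcp device performs a plain TCP connect()
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo '10.0.0.2:4420 refused the connection (errno 111, as logged above)'
    fi
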
00:24:33.519 20:25:10 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:33.519 20:25:10 -- target/shutdown.sh@138 -- # sleep 1 00:24:34.458 20:25:11 -- target/shutdown.sh@141 -- # kill -9 1880560 00:24:34.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1880560) - No such process 00:24:34.458 20:25:11 -- target/shutdown.sh@141 -- # true 00:24:34.458 20:25:11 -- target/shutdown.sh@143 -- # stoptarget 00:24:34.458 20:25:11 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:34.458 20:25:11 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:34.458 20:25:11 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:34.458 20:25:11 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:34.458 20:25:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:34.458 20:25:11 -- nvmf/common.sh@116 -- # sync 00:24:34.458 20:25:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:34.458 20:25:11 -- nvmf/common.sh@119 -- # set +e 00:24:34.458 20:25:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:34.458 20:25:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:34.458 rmmod nvme_tcp 00:24:34.458 rmmod nvme_fabrics 00:24:34.458 rmmod nvme_keyring 00:24:34.717 20:25:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:34.717 20:25:11 -- nvmf/common.sh@123 -- # set -e 00:24:34.717 20:25:11 -- nvmf/common.sh@124 -- # return 0 00:24:34.717 20:25:11 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:34.717 20:25:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:34.717 20:25:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:34.717 20:25:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:34.717 20:25:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.717 20:25:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:34.717 20:25:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.717 20:25:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.717 20:25:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.677 20:25:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:36.677 00:24:36.677 real 0m8.003s 00:24:36.677 user 0m19.806s 00:24:36.677 sys 0m1.220s 00:24:36.677 20:25:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:36.677 20:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:36.677 ************************************ 00:24:36.677 END TEST nvmf_shutdown_tc3 00:24:36.677 ************************************ 00:24:36.677 20:25:13 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:36.677 00:24:36.677 real 0m31.141s 00:24:36.677 user 1m16.544s 00:24:36.677 sys 0m8.410s 00:24:36.677 20:25:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:36.677 20:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:36.677 ************************************ 00:24:36.677 END TEST nvmf_shutdown 00:24:36.677 ************************************ 00:24:36.677 20:25:14 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:24:36.677 20:25:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:36.677 20:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:36.677 20:25:14 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:24:36.677 20:25:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:36.677 20:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:36.677 
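
The stoptarget/nvmftestfini trace above tears the fixture down in a fixed order. Condensed into one place (SPDK_DIR stands in for the workspace path; the namespace removal and the retry-loop exit condition are assumptions, since the log shows only the _remove_spdk_ns call and the `for i in {1..20}` header):

    rm -f ./local-job0-0-verify.state
    rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf" "$SPDK_DIR/test/nvmf/target/rpcs.txt"
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # the module can be briefly busy, hence the retries
    done
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1               # drop the initiator-side test address
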
20:25:14 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:24:36.677 20:25:14 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:36.677 20:25:14 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:36.677 20:25:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:36.677 20:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:36.677 ************************************ 00:24:36.677 START TEST nvmf_multicontroller 00:24:36.677 ************************************ 00:24:36.677 20:25:14 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:36.937 * Looking for test storage... 00:24:36.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.937 20:25:14 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.937 20:25:14 -- nvmf/common.sh@7 -- # uname -s 00:24:36.937 20:25:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.937 20:25:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.937 20:25:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.937 20:25:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.937 20:25:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.937 20:25:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.937 20:25:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.937 20:25:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.937 20:25:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.937 20:25:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.937 20:25:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:36.937 20:25:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:36.937 20:25:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.937 20:25:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.937 20:25:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.937 20:25:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.937 20:25:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.937 20:25:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.937 20:25:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.937 20:25:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.937 20:25:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.937 20:25:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.937 20:25:14 -- paths/export.sh@5 -- # export PATH 00:24:36.937 20:25:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.937 20:25:14 -- nvmf/common.sh@46 -- # : 0 00:24:36.937 20:25:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:36.937 20:25:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:36.937 20:25:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:36.937 20:25:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.937 20:25:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.937 20:25:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:36.937 20:25:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:36.937 20:25:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:36.937 20:25:14 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:36.937 20:25:14 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:36.937 20:25:14 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:36.937 20:25:14 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:36.937 20:25:14 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:36.937 20:25:14 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:36.937 20:25:14 -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:36.937 20:25:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:36.937 20:25:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.937 20:25:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:36.937 20:25:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:36.937 20:25:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:36.937 20:25:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.937 20:25:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.937 20:25:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
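
The nvme gen-hostnqn call traced above is where common.sh fixes the host identity that later nvme connect invocations reuse. Its effect, reconstructed from the logged values (the hostid derivation is an assumption based on the NQN and the ID sharing one uuid):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # logged: nqn.2014-08.org.nvmexpress:uuid:801347e8-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: hostid = uuid portion of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
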
00:24:36.937 20:25:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:36.937 20:25:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:36.937 20:25:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:36.937 20:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:43.516 20:25:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:43.516 20:25:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:43.516 20:25:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:43.516 20:25:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:43.516 20:25:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:43.516 20:25:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:43.516 20:25:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:43.516 20:25:20 -- nvmf/common.sh@294 -- # net_devs=() 00:24:43.516 20:25:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:43.516 20:25:20 -- nvmf/common.sh@295 -- # e810=() 00:24:43.516 20:25:20 -- nvmf/common.sh@295 -- # local -ga e810 00:24:43.516 20:25:20 -- nvmf/common.sh@296 -- # x722=() 00:24:43.516 20:25:20 -- nvmf/common.sh@296 -- # local -ga x722 00:24:43.516 20:25:20 -- nvmf/common.sh@297 -- # mlx=() 00:24:43.516 20:25:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:43.516 20:25:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.516 20:25:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:43.516 20:25:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:43.516 20:25:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:43.516 20:25:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.516 20:25:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:43.516 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:43.516 20:25:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.516 20:25:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:43.516 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:43.516 20:25:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
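
gather_supported_nvmf_pci_devs, traced above and finishing below for the second port, whitelists known device IDs (0x1592/0x159b for e810 here) and then reads sysfs to learn which netdev each matched PCI function carries. Condensed from the xtrace:

    pci_devs=(0000:af:00.0 0000:af:00.1)   # the two functions whose ID 0x159b matched the e810 list
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path -> cvl_0_0, cvl_0_1
        net_devs+=("${pci_net_devs[@]}")
    done
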
00:24:43.516 20:25:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:43.516 20:25:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.516 20:25:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.516 20:25:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.516 20:25:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.516 20:25:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:43.516 Found net devices under 0000:af:00.0: cvl_0_0 00:24:43.516 20:25:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.516 20:25:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.516 20:25:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.516 20:25:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.516 20:25:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.516 20:25:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:43.516 Found net devices under 0000:af:00.1: cvl_0_1 00:24:43.516 20:25:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.516 20:25:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:43.516 20:25:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:43.516 20:25:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:43.516 20:25:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:43.516 20:25:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.516 20:25:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.516 20:25:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.516 20:25:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:43.516 20:25:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.516 20:25:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.516 20:25:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:43.516 20:25:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.516 20:25:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.516 20:25:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:43.516 20:25:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:43.516 20:25:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.516 20:25:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.516 20:25:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.516 20:25:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.516 20:25:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:43.517 20:25:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.517 20:25:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.517 20:25:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:24:43.517 20:25:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:43.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:43.517 00:24:43.517 --- 10.0.0.2 ping statistics --- 00:24:43.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.517 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:43.517 20:25:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:24:43.517 00:24:43.517 --- 10.0.0.1 ping statistics --- 00:24:43.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.517 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:24:43.517 20:25:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.517 20:25:20 -- nvmf/common.sh@410 -- # return 0 00:24:43.517 20:25:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:43.517 20:25:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.517 20:25:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:43.517 20:25:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:43.517 20:25:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.517 20:25:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:43.517 20:25:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:43.517 20:25:20 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:43.517 20:25:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:43.517 20:25:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:43.517 20:25:20 -- common/autotest_common.sh@10 -- # set +x 00:24:43.517 20:25:20 -- nvmf/common.sh@469 -- # nvmfpid=1885130 00:24:43.517 20:25:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:43.517 20:25:20 -- nvmf/common.sh@470 -- # waitforlisten 1885130 00:24:43.517 20:25:20 -- common/autotest_common.sh@817 -- # '[' -z 1885130 ']' 00:24:43.517 20:25:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.517 20:25:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:43.517 20:25:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.517 20:25:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:43.517 20:25:20 -- common/autotest_common.sh@10 -- # set +x 00:24:43.517 [2024-02-14 20:25:20.607989] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:24:43.517 [2024-02-14 20:25:20.608030] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.517 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.517 [2024-02-14 20:25:20.665774] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:43.517 [2024-02-14 20:25:20.740377] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:43.517 [2024-02-14 20:25:20.740484] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
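
The plumbing behind those two successful pings is spread through the trace above; gathered together, nvmf_tcp_init moves the target port into its own network namespace so initiator and target traffic crosses a real IP hop on one physical link (all names and addresses are the ones logged):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the test link
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
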
00:24:43.517 [2024-02-14 20:25:20.740492] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.517 [2024-02-14 20:25:20.740497] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.517 [2024-02-14 20:25:20.740529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.517 [2024-02-14 20:25:20.740635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.517 [2024-02-14 20:25:20.740636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.087 20:25:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:44.087 20:25:21 -- common/autotest_common.sh@850 -- # return 0 00:24:44.087 20:25:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:44.087 20:25:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:44.087 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.087 20:25:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.087 20:25:21 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:44.087 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.087 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.087 [2024-02-14 20:25:21.436600] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.087 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.087 20:25:21 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:44.087 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.087 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.087 Malloc0 00:24:44.087 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.087 20:25:21 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.087 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.087 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.087 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.087 20:25:21 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:44.087 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.087 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.347 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.348 20:25:21 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.348 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.348 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.348 [2024-02-14 20:25:21.508177] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.348 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.348 20:25:21 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:44.348 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.348 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.348 [2024-02-14 20:25:21.516113] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:44.348 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
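
nvmfappstart, traced above, launches nvmf_tgt inside the namespace and blocks until the app is reachable. The launch line is verbatim from the trace; the polling loop is an assumption about waitforlisten, which the log shows only as a call with the pid:

    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # assumed waitforlisten shape: poll the RPC socket until the target answers
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || break   # stop polling if the target died instead of listening
        sleep 0.1
    done
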
00:24:44.348 20:25:21 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:44.348 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.348 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.348 Malloc1 00:24:44.348 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.348 20:25:21 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:44.348 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.348 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.348 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.348 20:25:21 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:44.348 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.348 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.348 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.348 20:25:21 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:44.348 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.348 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.348 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.348 20:25:21 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:44.348 20:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.348 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:44.348 20:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.348 20:25:21 -- host/multicontroller.sh@44 -- # bdevperf_pid=1885273 00:24:44.348 20:25:21 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:44.348 20:25:21 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.348 20:25:21 -- host/multicontroller.sh@47 -- # waitforlisten 1885273 /var/tmp/bdevperf.sock 00:24:44.348 20:25:21 -- common/autotest_common.sh@817 -- # '[' -z 1885273 ']' 00:24:44.348 20:25:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.348 20:25:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:44.348 20:25:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
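
Taken together, the rpc_cmd calls above build the multicontroller fixture: one malloc bdev per subsystem, each subsystem listening on ports 4420 and 4421 of the same address, plus bdevperf parked on its own RPC socket. A condensed replay (each command string appears in the trace; the loop and the one-line rpc_cmd are simplifications of the suite's helpers):

    rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # simplified stand-in for the suite's wrapper
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2; do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$((i - 1))"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4421
    done
    # bdevperf idles in -z (wait-for-RPC) mode until perform_tests is sent later
    "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
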
00:24:44.348 20:25:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:44.348 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:24:45.286 20:25:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:45.286 20:25:22 -- common/autotest_common.sh@850 -- # return 0 00:24:45.286 20:25:22 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:45.286 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.286 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.286 NVMe0n1 00:24:45.286 20:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.286 20:25:22 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:45.286 20:25:22 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:45.286 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.286 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.286 20:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.286 1 00:24:45.286 20:25:22 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:45.286 20:25:22 -- common/autotest_common.sh@638 -- # local es=0 00:24:45.286 20:25:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:45.286 20:25:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:45.286 20:25:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.286 20:25:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:45.286 20:25:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.286 20:25:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:45.286 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.286 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.286 request: 00:24:45.286 { 00:24:45.286 "name": "NVMe0", 00:24:45.286 "trtype": "tcp", 00:24:45.286 "traddr": "10.0.0.2", 00:24:45.286 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:45.286 "hostaddr": "10.0.0.2", 00:24:45.286 "hostsvcid": "60000", 00:24:45.286 "adrfam": "ipv4", 00:24:45.286 "trsvcid": "4420", 00:24:45.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.286 "method": "bdev_nvme_attach_controller", 00:24:45.286 "req_id": 1 00:24:45.286 } 00:24:45.286 Got JSON-RPC error response 00:24:45.286 response: 00:24:45.286 { 00:24:45.286 "code": -114, 00:24:45.286 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:45.286 } 00:24:45.286 20:25:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:45.286 20:25:22 -- common/autotest_common.sh@641 -- # es=1 00:24:45.286 20:25:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:45.287 20:25:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:45.287 20:25:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:45.287 20:25:22 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:45.287 20:25:22 -- common/autotest_common.sh@638 -- # local es=0 00:24:45.287 20:25:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:45.287 20:25:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:45.287 20:25:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.287 20:25:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:45.287 20:25:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.287 20:25:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:45.287 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.287 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.287 request: 00:24:45.287 { 00:24:45.287 "name": "NVMe0", 00:24:45.287 "trtype": "tcp", 00:24:45.546 "traddr": "10.0.0.2", 00:24:45.546 "hostaddr": "10.0.0.2", 00:24:45.546 "hostsvcid": "60000", 00:24:45.546 "adrfam": "ipv4", 00:24:45.546 "trsvcid": "4420", 00:24:45.546 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:45.546 "method": "bdev_nvme_attach_controller", 00:24:45.546 "req_id": 1 00:24:45.546 } 00:24:45.546 Got JSON-RPC error response 00:24:45.546 response: 00:24:45.546 { 00:24:45.546 "code": -114, 00:24:45.546 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:45.546 } 00:24:45.546 20:25:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:45.546 20:25:22 -- common/autotest_common.sh@641 -- # es=1 00:24:45.546 20:25:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:45.546 20:25:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:45.546 20:25:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:45.546 20:25:22 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:45.546 20:25:22 -- common/autotest_common.sh@638 -- # local es=0 00:24:45.546 20:25:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:45.546 20:25:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:45.546 20:25:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.546 20:25:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:45.546 20:25:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.546 20:25:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:45.546 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.546 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.546 request: 00:24:45.546 { 00:24:45.546 "name": "NVMe0", 00:24:45.546 "trtype": "tcp", 00:24:45.546 "traddr": "10.0.0.2", 00:24:45.546 "hostaddr": 
"10.0.0.2", 00:24:45.546 "hostsvcid": "60000", 00:24:45.546 "adrfam": "ipv4", 00:24:45.546 "trsvcid": "4420", 00:24:45.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.546 "multipath": "disable", 00:24:45.546 "method": "bdev_nvme_attach_controller", 00:24:45.546 "req_id": 1 00:24:45.546 } 00:24:45.546 Got JSON-RPC error response 00:24:45.546 response: 00:24:45.546 { 00:24:45.546 "code": -114, 00:24:45.546 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:45.546 } 00:24:45.546 20:25:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:45.546 20:25:22 -- common/autotest_common.sh@641 -- # es=1 00:24:45.546 20:25:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:45.546 20:25:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:45.546 20:25:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:45.546 20:25:22 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:45.546 20:25:22 -- common/autotest_common.sh@638 -- # local es=0 00:24:45.546 20:25:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:45.546 20:25:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:45.546 20:25:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.546 20:25:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:45.547 20:25:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.547 20:25:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:45.547 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.547 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.547 request: 00:24:45.547 { 00:24:45.547 "name": "NVMe0", 00:24:45.547 "trtype": "tcp", 00:24:45.547 "traddr": "10.0.0.2", 00:24:45.547 "hostaddr": "10.0.0.2", 00:24:45.547 "hostsvcid": "60000", 00:24:45.547 "adrfam": "ipv4", 00:24:45.547 "trsvcid": "4420", 00:24:45.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.547 "multipath": "failover", 00:24:45.547 "method": "bdev_nvme_attach_controller", 00:24:45.547 "req_id": 1 00:24:45.547 } 00:24:45.547 Got JSON-RPC error response 00:24:45.547 response: 00:24:45.547 { 00:24:45.547 "code": -114, 00:24:45.547 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:45.547 } 00:24:45.547 20:25:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:45.547 20:25:22 -- common/autotest_common.sh@641 -- # es=1 00:24:45.547 20:25:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:45.547 20:25:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:45.547 20:25:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:45.547 20:25:22 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:45.547 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.547 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.547 00:24:45.547 20:25:22 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:24:45.547 20:25:22 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:45.547 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.547 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.547 20:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.547 20:25:22 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:45.547 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.547 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.547 00:24:45.547 20:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.547 20:25:22 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:45.547 20:25:22 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:45.547 20:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.547 20:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:45.547 20:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.547 20:25:22 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:45.547 20:25:22 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:46.926 0 00:24:46.926 20:25:24 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:46.926 20:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.926 20:25:24 -- common/autotest_common.sh@10 -- # set +x 00:24:46.926 20:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.926 20:25:24 -- host/multicontroller.sh@100 -- # killprocess 1885273 00:24:46.926 20:25:24 -- common/autotest_common.sh@924 -- # '[' -z 1885273 ']' 00:24:46.926 20:25:24 -- common/autotest_common.sh@928 -- # kill -0 1885273 00:24:46.926 20:25:24 -- common/autotest_common.sh@929 -- # uname 00:24:46.926 20:25:24 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:46.926 20:25:24 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1885273 00:24:46.926 20:25:24 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:46.926 20:25:24 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:46.926 20:25:24 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1885273' 00:24:46.926 killing process with pid 1885273 00:24:46.926 20:25:24 -- common/autotest_common.sh@943 -- # kill 1885273 00:24:46.926 20:25:24 -- common/autotest_common.sh@948 -- # wait 1885273 00:24:46.926 20:25:24 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.926 20:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.926 20:25:24 -- common/autotest_common.sh@10 -- # set +x 00:24:46.926 20:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.926 20:25:24 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:46.926 20:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.926 20:25:24 -- common/autotest_common.sh@10 -- # set +x 00:24:47.186 20:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.186 20:25:24 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
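
The four rejected attach attempts above probe one invariant: the controller name NVMe0 is already taken, so re-attaching under it fails with JSON-RPC error -114 whether a different hostnqn, multipath disable, or multipath failover is requested; only a new listener port under the existing name, or a fresh name, is accepted. NOT is the suite's expected-failure wrapper; its one-line definition below is an assumption, the rest is as traced:

    NOT() { ! "$@"; }   # assumed semantics: succeed only when the wrapped command fails
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000             # first attach, ok
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover  # -114
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000             # new name, ok
    [ "$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)" = 2 ]
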
00:24:47.186 20:25:24 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.186 20:25:24 -- common/autotest_common.sh@1595 -- # read -r file 00:24:47.186 20:25:24 -- common/autotest_common.sh@1594 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:47.186 20:25:24 -- common/autotest_common.sh@1594 -- # sort -u 00:24:47.186 20:25:24 -- common/autotest_common.sh@1596 -- # cat 00:24:47.186 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:47.186 [2024-02-14 20:25:21.617323] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:24:47.186 [2024-02-14 20:25:21.617374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885273 ] 00:24:47.186 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.186 [2024-02-14 20:25:21.679265] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.186 [2024-02-14 20:25:21.756563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.186 [2024-02-14 20:25:22.923060] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 63ba8555-1af0-487b-92e7-64098e5db765 already exists 00:24:47.186 [2024-02-14 20:25:22.923090] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:63ba8555-1af0-487b-92e7-64098e5db765 alias for bdev NVMe1n1 00:24:47.186 [2024-02-14 20:25:22.923100] bdev_nvme.c:4183:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:47.186 Running I/O for 1 seconds... 00:24:47.186 00:24:47.186 Latency(us) 00:24:47.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.186 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:47.186 NVMe0n1 : 1.00 23961.84 93.60 0.00 0.00 5327.32 1833.45 19972.88 00:24:47.186 =================================================================================================================== 00:24:47.186 Total : 23961.84 93.60 0.00 0.00 5327.32 1833.45 19972.88 00:24:47.186 Received shutdown signal, test time was about 1.000000 seconds 00:24:47.186 00:24:47.186 Latency(us) 00:24:47.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.186 =================================================================================================================== 00:24:47.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.186 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:47.186 20:25:24 -- common/autotest_common.sh@1601 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.186 20:25:24 -- common/autotest_common.sh@1595 -- # read -r file 00:24:47.186 20:25:24 -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:47.186 20:25:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:47.186 20:25:24 -- nvmf/common.sh@116 -- # sync 00:24:47.186 20:25:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:47.186 20:25:24 -- nvmf/common.sh@119 -- # set +e 00:24:47.186 20:25:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:47.186 20:25:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:47.186 rmmod nvme_tcp 00:24:47.186 rmmod nvme_fabrics 00:24:47.186 rmmod nvme_keyring 00:24:47.187 20:25:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:47.187 20:25:24 -- nvmf/common.sh@123 -- # set -e 
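
pap, whose xtrace opens the block above, is what splices bdevperf's try.txt (the section bracketed by the two "--- .../try.txt ---" markers, including the 23961.84 IOPS result) into this log before deleting the file, so per-test output survives cleanup. Its body is an inference from the traced find/sort/cat/rm sequence:

    pap() {
        while read -r file; do
            echo "--- $file ---"   # opening marker, as seen in the log
            cat "$file"
            echo "--- $file ---"   # closing marker
            rm -f "$file"
        done < <(find "$@" -type f | sort -u)
    }
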
00:24:47.187 20:25:24 -- nvmf/common.sh@124 -- # return 0 00:24:47.187 20:25:24 -- nvmf/common.sh@477 -- # '[' -n 1885130 ']' 00:24:47.187 20:25:24 -- nvmf/common.sh@478 -- # killprocess 1885130 00:24:47.187 20:25:24 -- common/autotest_common.sh@924 -- # '[' -z 1885130 ']' 00:24:47.187 20:25:24 -- common/autotest_common.sh@928 -- # kill -0 1885130 00:24:47.187 20:25:24 -- common/autotest_common.sh@929 -- # uname 00:24:47.187 20:25:24 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:47.187 20:25:24 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1885130 00:24:47.187 20:25:24 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:24:47.187 20:25:24 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:24:47.187 20:25:24 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1885130' 00:24:47.187 killing process with pid 1885130 00:24:47.187 20:25:24 -- common/autotest_common.sh@943 -- # kill 1885130 00:24:47.187 20:25:24 -- common/autotest_common.sh@948 -- # wait 1885130 00:24:47.446 20:25:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:47.446 20:25:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:47.446 20:25:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:47.446 20:25:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.446 20:25:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:47.446 20:25:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.446 20:25:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.446 20:25:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.983 20:25:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:49.983 00:24:49.984 real 0m12.720s 00:24:49.984 user 0m16.675s 00:24:49.984 sys 0m5.514s 00:24:49.984 20:25:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:49.984 20:25:26 -- common/autotest_common.sh@10 -- # set +x 00:24:49.984 ************************************ 00:24:49.984 END TEST nvmf_multicontroller 00:24:49.984 ************************************ 00:24:49.984 20:25:26 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:49.984 20:25:26 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:49.984 20:25:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:49.984 20:25:26 -- common/autotest_common.sh@10 -- # set +x 00:24:49.984 ************************************ 00:24:49.984 START TEST nvmf_aer 00:24:49.984 ************************************ 00:24:49.984 20:25:26 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:49.984 * Looking for test storage... 
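
killprocess, used above on both the bdevperf pid (1885273) and the target pid (1885130), is deliberately defensive: it confirms the pid is still alive and still an SPDK reactor before signalling, so a recycled pid is never killed by accident. A sketch matching the traced checks (the sudo branch never runs in this log, so its action is an assumption):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                    # process must still exist
        local name
        name=$(ps --no-headers -o comm= "$pid")       # logged values here: reactor_0, reactor_1
        if [ "$name" = sudo ]; then
            kill -9 "$pid"                            # assumption: escalate for sudo wrappers
        else
            echo "killing process with pid $pid"      # matches the 'killing process' lines above
            kill "$pid"
        fi
        wait "$pid" || true                           # reap it when it is our child
    }
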
00:24:49.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.984 20:25:26 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.984 20:25:26 -- nvmf/common.sh@7 -- # uname -s 00:24:49.984 20:25:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.984 20:25:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.984 20:25:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.984 20:25:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.984 20:25:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.984 20:25:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.984 20:25:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.984 20:25:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.984 20:25:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.984 20:25:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.984 20:25:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:49.984 20:25:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:49.984 20:25:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.984 20:25:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.984 20:25:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.984 20:25:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.984 20:25:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.984 20:25:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.984 20:25:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.984 20:25:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.984 20:25:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.984 20:25:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.984 20:25:26 -- paths/export.sh@5 -- # export PATH 00:24:49.984 20:25:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.984 20:25:26 -- nvmf/common.sh@46 -- # : 0 00:24:49.984 20:25:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:49.984 20:25:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:49.984 20:25:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:49.984 20:25:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.984 20:25:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.984 20:25:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:49.984 20:25:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:49.984 20:25:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:49.984 20:25:26 -- host/aer.sh@11 -- # nvmftestinit 00:24:49.984 20:25:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:49.984 20:25:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.984 20:25:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:49.984 20:25:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:49.984 20:25:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:49.984 20:25:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.984 20:25:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.984 20:25:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.984 20:25:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:49.984 20:25:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:49.984 20:25:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:49.984 20:25:26 -- common/autotest_common.sh@10 -- # set +x 00:24:56.556 20:25:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:56.556 20:25:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:56.556 20:25:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:56.556 20:25:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:56.556 20:25:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:56.556 20:25:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:56.556 20:25:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:56.556 20:25:32 -- nvmf/common.sh@294 -- # net_devs=() 00:24:56.556 20:25:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:56.556 20:25:32 -- nvmf/common.sh@295 -- # e810=() 00:24:56.556 20:25:32 -- nvmf/common.sh@295 -- # local -ga e810 00:24:56.556 20:25:32 -- nvmf/common.sh@296 -- # x722=() 00:24:56.556 
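The device-discovery loop traced just below resolves each matched E810 PCI address to its kernel network interface through sysfs; condensed here from the xtrace, with the array and variable names exactly as they appear in nvmf/common.sh:

    for pci in "${pci_devs[@]}"; do
        # each PCI function lists its netdev under /sys/bus/pci/devices/<addr>/net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keeping e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done

This is why the two ports found at 0000:af:00.0 and 0000:af:00.1 surface in the log as cvl_0_0 and cvl_0_1.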
20:25:32 -- nvmf/common.sh@296 -- # local -ga x722 00:24:56.556 20:25:32 -- nvmf/common.sh@297 -- # mlx=() 00:24:56.556 20:25:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:56.556 20:25:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.556 20:25:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:56.556 20:25:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:56.556 20:25:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:56.556 20:25:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:56.556 20:25:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:56.556 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:56.556 20:25:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:56.556 20:25:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:56.556 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:56.556 20:25:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:56.556 20:25:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:56.556 20:25:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:56.556 20:25:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.556 20:25:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:56.556 20:25:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.556 20:25:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:56.556 Found net devices under 0000:af:00.0: cvl_0_0 00:24:56.556 20:25:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.556 20:25:32 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:56.556 20:25:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.556 20:25:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:56.556 20:25:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.557 20:25:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:56.557 Found net devices under 0000:af:00.1: cvl_0_1 00:24:56.557 20:25:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.557 20:25:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:56.557 20:25:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:56.557 20:25:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:56.557 20:25:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:56.557 20:25:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:56.557 20:25:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.557 20:25:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.557 20:25:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.557 20:25:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:56.557 20:25:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.557 20:25:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.557 20:25:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:56.557 20:25:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.557 20:25:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.557 20:25:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:56.557 20:25:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:56.557 20:25:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.557 20:25:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.557 20:25:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.557 20:25:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.557 20:25:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:56.557 20:25:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.557 20:25:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.557 20:25:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.557 20:25:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:56.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:24:56.557 00:24:56.557 --- 10.0.0.2 ping statistics --- 00:24:56.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.557 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:56.557 20:25:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:24:56.557 00:24:56.557 --- 10.0.0.1 ping statistics --- 00:24:56.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.557 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:24:56.557 20:25:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.557 20:25:33 -- nvmf/common.sh@410 -- # return 0 00:24:56.557 20:25:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:56.557 20:25:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.557 20:25:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:56.557 20:25:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:56.557 20:25:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.557 20:25:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:56.557 20:25:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:56.557 20:25:33 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:56.557 20:25:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:56.557 20:25:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:56.557 20:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.557 20:25:33 -- nvmf/common.sh@469 -- # nvmfpid=1889645 00:24:56.557 20:25:33 -- nvmf/common.sh@470 -- # waitforlisten 1889645 00:24:56.557 20:25:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:56.557 20:25:33 -- common/autotest_common.sh@817 -- # '[' -z 1889645 ']' 00:24:56.557 20:25:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.557 20:25:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:56.557 20:25:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.557 20:25:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:56.557 20:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.557 [2024-02-14 20:25:33.092624] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:24:56.557 [2024-02-14 20:25:33.092672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.557 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.557 [2024-02-14 20:25:33.159358] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:56.557 [2024-02-14 20:25:33.235313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:56.557 [2024-02-14 20:25:33.235413] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.557 [2024-02-14 20:25:33.235420] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.557 [2024-02-14 20:25:33.235426] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
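The nvmf_tcp_init sequence traced above builds the test topology: the target-side port is moved into its own network namespace, the initiator-side port stays in the root namespace, and both ends of the 10.0.0.0/24 link are pinged to prove traffic really crosses the NICs. Condensed from the trace, with interface and namespace names as logged:

    ip netns add cvl_0_0_ns_spdk                       # target gets a private netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target check

nvmf_tgt itself is then launched under `ip netns exec cvl_0_0_ns_spdk` (the NVMF_APP prefix in the trace), so connections from the initiator side traverse the physical link rather than loopback.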
00:24:56.557 [2024-02-14 20:25:33.235531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.557 [2024-02-14 20:25:33.235626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.557 [2024-02-14 20:25:33.235716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:56.557 [2024-02-14 20:25:33.235717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.557 20:25:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:56.557 20:25:33 -- common/autotest_common.sh@850 -- # return 0 00:24:56.557 20:25:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:56.557 20:25:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:56.557 20:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.557 20:25:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.557 20:25:33 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.557 20:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.557 20:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.557 [2024-02-14 20:25:33.932869] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.557 20:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.557 20:25:33 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:56.557 20:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.557 20:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.557 Malloc0 00:24:56.557 20:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.557 20:25:33 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:56.557 20:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.557 20:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.557 20:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.816 20:25:33 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:56.816 20:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.816 20:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.816 20:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.816 20:25:33 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.816 20:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.816 20:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.816 [2024-02-14 20:25:33.984187] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.816 20:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.817 20:25:33 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:56.817 20:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.817 20:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.817 [2024-02-14 20:25:33.992010] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:56.817 [ 00:24:56.817 { 00:24:56.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:56.817 "subtype": "Discovery", 00:24:56.817 "listen_addresses": [], 00:24:56.817 "allow_any_host": true, 00:24:56.817 "hosts": [] 00:24:56.817 }, 00:24:56.817 { 00:24:56.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:56.817 "subtype": "NVMe", 00:24:56.817 "listen_addresses": [ 00:24:56.817 { 00:24:56.817 "transport": "TCP", 00:24:56.817 "trtype": "TCP", 00:24:56.817 "adrfam": "IPv4", 00:24:56.817 "traddr": "10.0.0.2", 00:24:56.817 "trsvcid": "4420" 00:24:56.817 } 00:24:56.817 ], 00:24:56.817 "allow_any_host": true, 00:24:56.817 "hosts": [], 00:24:56.817 "serial_number": "SPDK00000000000001", 00:24:56.817 "model_number": "SPDK bdev Controller", 00:24:56.817 "max_namespaces": 2, 00:24:56.817 "min_cntlid": 1, 00:24:56.817 "max_cntlid": 65519, 00:24:56.817 "namespaces": [ 00:24:56.817 { 00:24:56.817 "nsid": 1, 00:24:56.817 "bdev_name": "Malloc0", 00:24:56.817 "name": "Malloc0", 00:24:56.817 "nguid": "1261AD510B864D0A8F1FE7009456816C", 00:24:56.817 "uuid": "1261ad51-0b86-4d0a-8f1f-e7009456816c" 00:24:56.817 } 00:24:56.817 ] 00:24:56.817 } 00:24:56.817 ] 00:24:56.817 20:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.817 20:25:34 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:56.817 20:25:34 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:56.817 20:25:34 -- host/aer.sh@33 -- # aerpid=1889683 00:24:56.817 20:25:34 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:56.817 20:25:34 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:56.817 20:25:34 -- common/autotest_common.sh@1242 -- # local i=0 00:24:56.817 20:25:34 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:56.817 20:25:34 -- common/autotest_common.sh@1244 -- # '[' 0 -lt 200 ']' 00:24:56.817 20:25:34 -- common/autotest_common.sh@1245 -- # i=1 00:24:56.817 20:25:34 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:24:56.817 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.817 20:25:34 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:56.817 20:25:34 -- common/autotest_common.sh@1244 -- # '[' 1 -lt 200 ']' 00:24:56.817 20:25:34 -- common/autotest_common.sh@1245 -- # i=2 00:24:56.817 20:25:34 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:24:56.817 20:25:34 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:56.817 20:25:34 -- common/autotest_common.sh@1244 -- # '[' 2 -lt 200 ']' 00:24:56.817 20:25:34 -- common/autotest_common.sh@1245 -- # i=3 00:24:56.817 20:25:34 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:24:57.076 20:25:34 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:57.076 20:25:34 -- common/autotest_common.sh@1249 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:57.076 20:25:34 -- common/autotest_common.sh@1253 -- # return 0 00:24:57.076 20:25:34 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:57.076 20:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.076 20:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.076 Malloc1 00:24:57.076 20:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.076 20:25:34 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:57.076 20:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.076 20:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.076 20:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.076 20:25:34 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:57.076 20:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.076 20:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.076 [ 00:24:57.076 { 00:24:57.076 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:57.076 "subtype": "Discovery", 00:24:57.076 "listen_addresses": [], 00:24:57.076 "allow_any_host": true, 00:24:57.076 "hosts": [] 00:24:57.076 }, 00:24:57.076 { 00:24:57.076 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.076 "subtype": "NVMe", 00:24:57.076 "listen_addresses": [ 00:24:57.076 { 00:24:57.076 "transport": "TCP", 00:24:57.076 "trtype": "TCP", 00:24:57.076 "adrfam": "IPv4", 00:24:57.076 "traddr": "10.0.0.2", 00:24:57.076 "trsvcid": "4420" 00:24:57.076 } 00:24:57.076 ], 00:24:57.076 "allow_any_host": true, 00:24:57.076 "hosts": [], 00:24:57.076 "serial_number": "SPDK00000000000001", 00:24:57.076 "model_number": "SPDK bdev Controller", 00:24:57.076 "max_namespaces": 2, 00:24:57.076 "min_cntlid": 1, 00:24:57.076 "max_cntlid": 65519, 00:24:57.076 "namespaces": [ 00:24:57.076 { 00:24:57.076 "nsid": 1, 00:24:57.076 "bdev_name": "Malloc0", 00:24:57.076 "name": "Malloc0", 00:24:57.076 "nguid": "1261AD510B864D0A8F1FE7009456816C", 00:24:57.076 "uuid": "1261ad51-0b86-4d0a-8f1f-e7009456816c" 00:24:57.076 }, 00:24:57.076 { 00:24:57.076 "nsid": 2, 00:24:57.076 "bdev_name": "Malloc1", 00:24:57.076 Asynchronous Event Request test 00:24:57.076 Attaching to 10.0.0.2 00:24:57.076 Attached to 10.0.0.2 00:24:57.076 Registering asynchronous event callbacks... 00:24:57.076 Starting namespace attribute notice tests for all controllers... 00:24:57.076 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:57.076 aer_cb - Changed Namespace 00:24:57.076 Cleaning up... 
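The aer tool and the shell synchronize through the touch file: the tool registers its AER callback and then creates /tmp/aer_touch_file, and the script polls for that file before creating Malloc1 and adding it as a second namespace, which is what fires the "Changed Namespace" notice logged above. The polling helper, reconstructed from the `-lt 200` / `sleep 0.1` xtrace (the actual autotest_common.sh may differ):

    # wait up to ~20 s (200 polls at 0.1 s each) for a sync file to appear
    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }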
00:24:57.076 "name": "Malloc1", 00:24:57.076 "nguid": "81240E908D6847F09311C191C0E3A2DA", 00:24:57.076 "uuid": "81240e90-8d68-47f0-9311-c191c0e3a2da" 00:24:57.076 } 00:24:57.076 ] 00:24:57.076 } 00:24:57.076 ] 00:24:57.076 20:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.076 20:25:34 -- host/aer.sh@43 -- # wait 1889683 00:24:57.076 20:25:34 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:57.076 20:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.076 20:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.076 20:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.076 20:25:34 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:57.076 20:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.077 20:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.077 20:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.077 20:25:34 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.077 20:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.077 20:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.077 20:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.077 20:25:34 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:57.077 20:25:34 -- host/aer.sh@51 -- # nvmftestfini 00:24:57.077 20:25:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:57.077 20:25:34 -- nvmf/common.sh@116 -- # sync 00:24:57.077 20:25:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:57.077 20:25:34 -- nvmf/common.sh@119 -- # set +e 00:24:57.077 20:25:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:57.077 20:25:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:57.077 rmmod nvme_tcp 00:24:57.077 rmmod nvme_fabrics 00:24:57.337 rmmod nvme_keyring 00:24:57.337 20:25:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:57.337 20:25:34 -- nvmf/common.sh@123 -- # set -e 00:24:57.337 20:25:34 -- nvmf/common.sh@124 -- # return 0 00:24:57.337 20:25:34 -- nvmf/common.sh@477 -- # '[' -n 1889645 ']' 00:24:57.337 20:25:34 -- nvmf/common.sh@478 -- # killprocess 1889645 00:24:57.337 20:25:34 -- common/autotest_common.sh@924 -- # '[' -z 1889645 ']' 00:24:57.337 20:25:34 -- common/autotest_common.sh@928 -- # kill -0 1889645 00:24:57.337 20:25:34 -- common/autotest_common.sh@929 -- # uname 00:24:57.337 20:25:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:57.337 20:25:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1889645 00:24:57.337 20:25:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:57.337 20:25:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:57.337 20:25:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1889645' 00:24:57.337 killing process with pid 1889645 00:24:57.337 20:25:34 -- common/autotest_common.sh@943 -- # kill 1889645 00:24:57.337 [2024-02-14 20:25:34.570447] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:57.337 20:25:34 -- common/autotest_common.sh@948 -- # wait 1889645 00:24:57.596 20:25:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:57.596 20:25:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:57.596 20:25:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:57.596 20:25:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:24:57.596 20:25:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:57.596 20:25:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.596 20:25:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.596 20:25:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.505 20:25:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:59.505 00:24:59.505 real 0m10.002s 00:24:59.505 user 0m7.828s 00:24:59.505 sys 0m4.976s 00:24:59.505 20:25:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:59.505 20:25:36 -- common/autotest_common.sh@10 -- # set +x 00:24:59.505 ************************************ 00:24:59.505 END TEST nvmf_aer 00:24:59.505 ************************************ 00:24:59.505 20:25:36 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:59.505 20:25:36 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:59.505 20:25:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:59.505 20:25:36 -- common/autotest_common.sh@10 -- # set +x 00:24:59.505 ************************************ 00:24:59.505 START TEST nvmf_async_init 00:24:59.505 ************************************ 00:24:59.505 20:25:36 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:59.765 * Looking for test storage... 00:24:59.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:59.765 20:25:36 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.765 20:25:36 -- nvmf/common.sh@7 -- # uname -s 00:24:59.765 20:25:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.765 20:25:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.765 20:25:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.765 20:25:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.765 20:25:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.765 20:25:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.765 20:25:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.765 20:25:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.765 20:25:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.765 20:25:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.765 20:25:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:59.765 20:25:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:59.765 20:25:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.765 20:25:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.765 20:25:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.765 20:25:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.765 20:25:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.765 20:25:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.765 20:25:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.765 20:25:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.765 20:25:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.765 20:25:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.765 20:25:36 -- paths/export.sh@5 -- # export PATH 00:24:59.765 20:25:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.765 20:25:36 -- nvmf/common.sh@46 -- # : 0 00:24:59.765 20:25:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:59.765 20:25:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:59.765 20:25:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:59.765 20:25:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.765 20:25:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.765 20:25:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:59.765 20:25:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:59.765 20:25:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:59.765 20:25:36 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:59.765 20:25:36 -- host/async_init.sh@14 -- # null_block_size=512 00:24:59.765 20:25:36 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:59.765 20:25:36 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:59.765 20:25:36 -- host/async_init.sh@20 -- # uuidgen 00:24:59.765 20:25:36 -- host/async_init.sh@20 -- # tr -d - 00:24:59.765 20:25:36 -- host/async_init.sh@20 -- # nguid=5c019cb2cb694599a54dddae4f27e750 00:24:59.766 20:25:36 -- host/async_init.sh@22 -- # nvmftestinit 00:24:59.766 20:25:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
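async_init derives the namespace NGUID by stripping the dashes from a freshly generated UUID, as traced just above (yielding 5c019cb2cb694599a54dddae4f27e750 on this run), and later checks that the same value round-trips through the target and reappears as the bdev UUID in bdev_get_bdevs. The target-side wiring, written as the equivalent scripts/rpc.py calls to the rpc_cmd invocations traced further below (rpc.py here stands in for whatever RPC client rpc_cmd wraps):

    nguid=$(uuidgen | tr -d -)                  # e.g. 5c019cb2cb694599a54dddae4f27e750
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512      # 1024 blocks x 512 B backing namespace
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420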
00:24:59.766 20:25:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.766 20:25:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:59.766 20:25:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:59.766 20:25:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:59.766 20:25:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.766 20:25:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.766 20:25:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.766 20:25:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:59.766 20:25:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:59.766 20:25:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:59.766 20:25:36 -- common/autotest_common.sh@10 -- # set +x 00:25:06.335 20:25:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:06.335 20:25:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:06.336 20:25:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:06.336 20:25:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:06.336 20:25:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:06.336 20:25:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:06.336 20:25:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:06.336 20:25:42 -- nvmf/common.sh@294 -- # net_devs=() 00:25:06.336 20:25:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:06.336 20:25:42 -- nvmf/common.sh@295 -- # e810=() 00:25:06.336 20:25:42 -- nvmf/common.sh@295 -- # local -ga e810 00:25:06.336 20:25:42 -- nvmf/common.sh@296 -- # x722=() 00:25:06.336 20:25:42 -- nvmf/common.sh@296 -- # local -ga x722 00:25:06.336 20:25:42 -- nvmf/common.sh@297 -- # mlx=() 00:25:06.336 20:25:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:06.336 20:25:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.336 20:25:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:06.336 20:25:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:06.336 20:25:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:06.336 20:25:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:06.336 20:25:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:06.336 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:06.336 20:25:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:06.336 20:25:42 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:06.336 20:25:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:06.336 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:06.336 20:25:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:06.336 20:25:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:06.336 20:25:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.336 20:25:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:06.336 20:25:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.336 20:25:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:06.336 Found net devices under 0000:af:00.0: cvl_0_0 00:25:06.336 20:25:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.336 20:25:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:06.336 20:25:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.336 20:25:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:06.336 20:25:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.336 20:25:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:06.336 Found net devices under 0000:af:00.1: cvl_0_1 00:25:06.336 20:25:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.336 20:25:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:06.336 20:25:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:06.336 20:25:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:06.336 20:25:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.336 20:25:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.336 20:25:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.336 20:25:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:06.336 20:25:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.336 20:25:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.336 20:25:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:06.336 20:25:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.336 20:25:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.336 20:25:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:06.336 20:25:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:06.336 20:25:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.336 20:25:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:25:06.336 20:25:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.336 20:25:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.336 20:25:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:06.336 20:25:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.336 20:25:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.336 20:25:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.336 20:25:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:06.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:25:06.336 00:25:06.336 --- 10.0.0.2 ping statistics --- 00:25:06.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.336 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:25:06.336 20:25:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:25:06.336 00:25:06.336 --- 10.0.0.1 ping statistics --- 00:25:06.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.336 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:25:06.336 20:25:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.336 20:25:42 -- nvmf/common.sh@410 -- # return 0 00:25:06.336 20:25:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:06.336 20:25:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.336 20:25:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:06.336 20:25:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.336 20:25:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:06.336 20:25:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:06.336 20:25:42 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:06.336 20:25:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:06.336 20:25:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:06.336 20:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:06.336 20:25:42 -- nvmf/common.sh@469 -- # nvmfpid=1893678 00:25:06.336 20:25:42 -- nvmf/common.sh@470 -- # waitforlisten 1893678 00:25:06.336 20:25:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:06.336 20:25:42 -- common/autotest_common.sh@817 -- # '[' -z 1893678 ']' 00:25:06.336 20:25:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.336 20:25:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:06.336 20:25:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.336 20:25:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:06.336 20:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:06.336 [2024-02-14 20:25:42.984217] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:25:06.337 [2024-02-14 20:25:42.984261] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.337 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.337 [2024-02-14 20:25:43.045109] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.337 [2024-02-14 20:25:43.120495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:06.337 [2024-02-14 20:25:43.120597] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.337 [2024-02-14 20:25:43.120605] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.337 [2024-02-14 20:25:43.120611] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.337 [2024-02-14 20:25:43.120627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.595 20:25:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:06.595 20:25:43 -- common/autotest_common.sh@850 -- # return 0 00:25:06.595 20:25:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:06.595 20:25:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:06.595 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 20:25:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.595 20:25:43 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:06.595 20:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.595 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 [2024-02-14 20:25:43.814998] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.595 20:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.595 20:25:43 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:06.595 20:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.595 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 null0 00:25:06.595 20:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.595 20:25:43 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:06.595 20:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.595 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 20:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.595 20:25:43 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:06.595 20:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.595 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 20:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.595 20:25:43 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5c019cb2cb694599a54dddae4f27e750 00:25:06.595 20:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.595 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 20:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.595 20:25:43 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.595 20:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.595 20:25:43 -- 
common/autotest_common.sh@10 -- # set +x 00:25:06.595 [2024-02-14 20:25:43.859196] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.595 20:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.595 20:25:43 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:06.595 20:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.595 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:25:06.854 nvme0n1 00:25:06.854 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.854 20:25:44 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:06.854 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.854 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:06.854 [ 00:25:06.854 { 00:25:06.854 "name": "nvme0n1", 00:25:06.854 "aliases": [ 00:25:06.854 "5c019cb2-cb69-4599-a54d-ddae4f27e750" 00:25:06.854 ], 00:25:06.854 "product_name": "NVMe disk", 00:25:06.854 "block_size": 512, 00:25:06.854 "num_blocks": 2097152, 00:25:06.854 "uuid": "5c019cb2-cb69-4599-a54d-ddae4f27e750", 00:25:06.854 "assigned_rate_limits": { 00:25:06.854 "rw_ios_per_sec": 0, 00:25:06.854 "rw_mbytes_per_sec": 0, 00:25:06.854 "r_mbytes_per_sec": 0, 00:25:06.854 "w_mbytes_per_sec": 0 00:25:06.854 }, 00:25:06.854 "claimed": false, 00:25:06.854 "zoned": false, 00:25:06.854 "supported_io_types": { 00:25:06.854 "read": true, 00:25:06.854 "write": true, 00:25:06.854 "unmap": false, 00:25:06.854 "write_zeroes": true, 00:25:06.854 "flush": true, 00:25:06.854 "reset": true, 00:25:06.854 "compare": true, 00:25:06.854 "compare_and_write": true, 00:25:06.854 "abort": true, 00:25:06.854 "nvme_admin": true, 00:25:06.854 "nvme_io": true 00:25:06.854 }, 00:25:06.854 "driver_specific": { 00:25:06.854 "nvme": [ 00:25:06.854 { 00:25:06.854 "trid": { 00:25:06.854 "trtype": "TCP", 00:25:06.854 "adrfam": "IPv4", 00:25:06.854 "traddr": "10.0.0.2", 00:25:06.854 "trsvcid": "4420", 00:25:06.854 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:06.854 }, 00:25:06.854 "ctrlr_data": { 00:25:06.854 "cntlid": 1, 00:25:06.854 "vendor_id": "0x8086", 00:25:06.854 "model_number": "SPDK bdev Controller", 00:25:06.854 "serial_number": "00000000000000000000", 00:25:06.854 "firmware_revision": "24.05", 00:25:06.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.854 "oacs": { 00:25:06.854 "security": 0, 00:25:06.854 "format": 0, 00:25:06.854 "firmware": 0, 00:25:06.854 "ns_manage": 0 00:25:06.854 }, 00:25:06.854 "multi_ctrlr": true, 00:25:06.854 "ana_reporting": false 00:25:06.854 }, 00:25:06.854 "vs": { 00:25:06.854 "nvme_version": "1.3" 00:25:06.854 }, 00:25:06.854 "ns_data": { 00:25:06.854 "id": 1, 00:25:06.854 "can_share": true 00:25:06.854 } 00:25:06.854 } 00:25:06.854 ], 00:25:06.854 "mp_policy": "active_passive" 00:25:06.854 } 00:25:06.854 } 00:25:06.854 ] 00:25:06.854 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.854 20:25:44 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:06.854 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.854 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:06.854 [2024-02-14 20:25:44.115757] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:06.854 [2024-02-14 20:25:44.115822] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8df0 (9): Bad file 
descriptor 00:25:06.854 [2024-02-14 20:25:44.247739] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:06.854 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.854 20:25:44 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:06.854 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.854 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:06.854 [ 00:25:06.854 { 00:25:06.854 "name": "nvme0n1", 00:25:06.854 "aliases": [ 00:25:06.854 "5c019cb2-cb69-4599-a54d-ddae4f27e750" 00:25:06.854 ], 00:25:06.854 "product_name": "NVMe disk", 00:25:06.854 "block_size": 512, 00:25:06.854 "num_blocks": 2097152, 00:25:06.854 "uuid": "5c019cb2-cb69-4599-a54d-ddae4f27e750", 00:25:06.854 "assigned_rate_limits": { 00:25:06.854 "rw_ios_per_sec": 0, 00:25:06.854 "rw_mbytes_per_sec": 0, 00:25:06.854 "r_mbytes_per_sec": 0, 00:25:06.854 "w_mbytes_per_sec": 0 00:25:06.854 }, 00:25:06.854 "claimed": false, 00:25:06.854 "zoned": false, 00:25:06.854 "supported_io_types": { 00:25:06.854 "read": true, 00:25:06.854 "write": true, 00:25:06.854 "unmap": false, 00:25:06.854 "write_zeroes": true, 00:25:06.854 "flush": true, 00:25:06.854 "reset": true, 00:25:06.854 "compare": true, 00:25:06.854 "compare_and_write": true, 00:25:06.854 "abort": true, 00:25:06.854 "nvme_admin": true, 00:25:06.854 "nvme_io": true 00:25:06.854 }, 00:25:06.854 "driver_specific": { 00:25:06.854 "nvme": [ 00:25:06.854 { 00:25:06.854 "trid": { 00:25:06.854 "trtype": "TCP", 00:25:06.854 "adrfam": "IPv4", 00:25:06.854 "traddr": "10.0.0.2", 00:25:06.854 "trsvcid": "4420", 00:25:06.854 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:06.854 }, 00:25:06.854 "ctrlr_data": { 00:25:06.854 "cntlid": 2, 00:25:06.854 "vendor_id": "0x8086", 00:25:06.854 "model_number": "SPDK bdev Controller", 00:25:06.854 "serial_number": "00000000000000000000", 00:25:06.854 "firmware_revision": "24.05", 00:25:06.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.854 "oacs": { 00:25:06.854 "security": 0, 00:25:06.854 "format": 0, 00:25:06.854 "firmware": 0, 00:25:06.854 "ns_manage": 0 00:25:06.854 }, 00:25:06.854 "multi_ctrlr": true, 00:25:06.854 "ana_reporting": false 00:25:06.854 }, 00:25:06.854 "vs": { 00:25:06.855 "nvme_version": "1.3" 00:25:06.855 }, 00:25:06.855 "ns_data": { 00:25:06.855 "id": 1, 00:25:06.855 "can_share": true 00:25:06.855 } 00:25:06.855 } 00:25:06.855 ], 00:25:06.855 "mp_policy": "active_passive" 00:25:06.855 } 00:25:06.855 } 00:25:06.855 ] 00:25:06.855 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.855 20:25:44 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.855 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.855 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.143 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.143 20:25:44 -- host/async_init.sh@53 -- # mktemp 00:25:07.143 20:25:44 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.n2xgJekFqv 00:25:07.143 20:25:44 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:07.143 20:25:44 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.n2xgJekFqv 00:25:07.143 20:25:44 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:07.143 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.143 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.143 20:25:44 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.143 20:25:44 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:07.143 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.143 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.143 [2024-02-14 20:25:44.304343] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:07.143 [2024-02-14 20:25:44.304445] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:07.143 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.143 20:25:44 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n2xgJekFqv 00:25:07.143 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.143 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.143 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.143 20:25:44 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n2xgJekFqv 00:25:07.143 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.143 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.143 [2024-02-14 20:25:44.324396] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:07.143 nvme0n1 00:25:07.143 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.143 20:25:44 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:07.143 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.143 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.143 [ 00:25:07.143 { 00:25:07.143 "name": "nvme0n1", 00:25:07.143 "aliases": [ 00:25:07.143 "5c019cb2-cb69-4599-a54d-ddae4f27e750" 00:25:07.143 ], 00:25:07.143 "product_name": "NVMe disk", 00:25:07.143 "block_size": 512, 00:25:07.143 "num_blocks": 2097152, 00:25:07.143 "uuid": "5c019cb2-cb69-4599-a54d-ddae4f27e750", 00:25:07.143 "assigned_rate_limits": { 00:25:07.143 "rw_ios_per_sec": 0, 00:25:07.143 "rw_mbytes_per_sec": 0, 00:25:07.143 "r_mbytes_per_sec": 0, 00:25:07.143 "w_mbytes_per_sec": 0 00:25:07.143 }, 00:25:07.143 "claimed": false, 00:25:07.143 "zoned": false, 00:25:07.143 "supported_io_types": { 00:25:07.143 "read": true, 00:25:07.143 "write": true, 00:25:07.143 "unmap": false, 00:25:07.143 "write_zeroes": true, 00:25:07.143 "flush": true, 00:25:07.143 "reset": true, 00:25:07.143 "compare": true, 00:25:07.143 "compare_and_write": true, 00:25:07.143 "abort": true, 00:25:07.143 "nvme_admin": true, 00:25:07.143 "nvme_io": true 00:25:07.143 }, 00:25:07.143 "driver_specific": { 00:25:07.143 "nvme": [ 00:25:07.143 { 00:25:07.143 "trid": { 00:25:07.143 "trtype": "TCP", 00:25:07.143 "adrfam": "IPv4", 00:25:07.143 "traddr": "10.0.0.2", 00:25:07.143 "trsvcid": "4421", 00:25:07.143 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:07.143 }, 00:25:07.143 "ctrlr_data": { 00:25:07.143 "cntlid": 3, 00:25:07.143 "vendor_id": "0x8086", 00:25:07.143 "model_number": "SPDK bdev Controller", 00:25:07.143 "serial_number": "00000000000000000000", 00:25:07.143 "firmware_revision": "24.05", 00:25:07.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:07.143 "oacs": { 00:25:07.143 "security": 0, 00:25:07.143 "format": 0, 00:25:07.143 "firmware": 0, 00:25:07.143 
"ns_manage": 0 00:25:07.143 }, 00:25:07.143 "multi_ctrlr": true, 00:25:07.143 "ana_reporting": false 00:25:07.143 }, 00:25:07.143 "vs": { 00:25:07.143 "nvme_version": "1.3" 00:25:07.143 }, 00:25:07.143 "ns_data": { 00:25:07.143 "id": 1, 00:25:07.143 "can_share": true 00:25:07.143 } 00:25:07.143 } 00:25:07.143 ], 00:25:07.143 "mp_policy": "active_passive" 00:25:07.143 } 00:25:07.144 } 00:25:07.144 ] 00:25:07.144 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.144 20:25:44 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.144 20:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.144 20:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:07.144 20:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.144 20:25:44 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.n2xgJekFqv 00:25:07.144 20:25:44 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:07.144 20:25:44 -- host/async_init.sh@78 -- # nvmftestfini 00:25:07.144 20:25:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:07.144 20:25:44 -- nvmf/common.sh@116 -- # sync 00:25:07.144 20:25:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:07.144 20:25:44 -- nvmf/common.sh@119 -- # set +e 00:25:07.144 20:25:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:07.144 20:25:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:07.144 rmmod nvme_tcp 00:25:07.144 rmmod nvme_fabrics 00:25:07.144 rmmod nvme_keyring 00:25:07.144 20:25:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:07.144 20:25:44 -- nvmf/common.sh@123 -- # set -e 00:25:07.144 20:25:44 -- nvmf/common.sh@124 -- # return 0 00:25:07.144 20:25:44 -- nvmf/common.sh@477 -- # '[' -n 1893678 ']' 00:25:07.144 20:25:44 -- nvmf/common.sh@478 -- # killprocess 1893678 00:25:07.144 20:25:44 -- common/autotest_common.sh@924 -- # '[' -z 1893678 ']' 00:25:07.144 20:25:44 -- common/autotest_common.sh@928 -- # kill -0 1893678 00:25:07.144 20:25:44 -- common/autotest_common.sh@929 -- # uname 00:25:07.144 20:25:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:07.144 20:25:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1893678 00:25:07.144 20:25:44 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:07.144 20:25:44 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:07.144 20:25:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1893678' 00:25:07.144 killing process with pid 1893678 00:25:07.144 20:25:44 -- common/autotest_common.sh@943 -- # kill 1893678 00:25:07.144 20:25:44 -- common/autotest_common.sh@948 -- # wait 1893678 00:25:07.403 20:25:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.403 20:25:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:07.403 20:25:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:07.403 20:25:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.403 20:25:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:07.403 20:25:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.403 20:25:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.403 20:25:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.941 20:25:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:09.941 00:25:09.941 real 0m9.899s 00:25:09.941 user 0m3.617s 00:25:09.941 sys 0m4.817s 00:25:09.941 20:25:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:09.941 20:25:46 -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.941 ************************************ 00:25:09.941 END TEST nvmf_async_init 00:25:09.941 ************************************ 00:25:09.942 20:25:46 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:09.942 20:25:46 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:25:09.942 20:25:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:09.942 20:25:46 -- common/autotest_common.sh@10 -- # set +x 00:25:09.942 ************************************ 00:25:09.942 START TEST dma 00:25:09.942 ************************************ 00:25:09.942 20:25:46 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:09.942 * Looking for test storage... 00:25:09.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.942 20:25:46 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.942 20:25:46 -- nvmf/common.sh@7 -- # uname -s 00:25:09.942 20:25:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.942 20:25:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.942 20:25:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.942 20:25:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.942 20:25:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.942 20:25:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.942 20:25:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.942 20:25:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.942 20:25:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.942 20:25:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.942 20:25:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:09.942 20:25:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:09.942 20:25:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.942 20:25:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.942 20:25:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.942 20:25:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.942 20:25:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.942 20:25:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.942 20:25:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.942 20:25:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.942 20:25:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.942 20:25:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.942 20:25:46 -- paths/export.sh@5 -- # export PATH 00:25:09.942 20:25:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.942 20:25:46 -- nvmf/common.sh@46 -- # : 0 00:25:09.942 20:25:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:09.942 20:25:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:09.942 20:25:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:09.942 20:25:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.942 20:25:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.942 20:25:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:09.942 20:25:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:09.942 20:25:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:09.942 20:25:46 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:09.942 20:25:46 -- host/dma.sh@13 -- # exit 0 00:25:09.942 00:25:09.942 real 0m0.111s 00:25:09.942 user 0m0.052s 00:25:09.942 sys 0m0.068s 00:25:09.942 20:25:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:09.942 20:25:46 -- common/autotest_common.sh@10 -- # set +x 00:25:09.942 ************************************ 00:25:09.942 END TEST dma 00:25:09.942 ************************************ 00:25:09.942 20:25:46 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:09.942 20:25:46 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:25:09.942 20:25:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:09.942 20:25:46 -- common/autotest_common.sh@10 -- # set +x 00:25:09.942 ************************************ 00:25:09.942 START TEST nvmf_identify 00:25:09.942 ************************************ 00:25:09.942 20:25:46 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:09.942 * Looking for 
test storage... 00:25:09.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.942 20:25:47 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.942 20:25:47 -- nvmf/common.sh@7 -- # uname -s 00:25:09.942 20:25:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.942 20:25:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.942 20:25:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.942 20:25:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.942 20:25:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.942 20:25:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.942 20:25:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.942 20:25:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.942 20:25:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.942 20:25:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.942 20:25:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:09.942 20:25:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:09.942 20:25:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.942 20:25:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.942 20:25:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.942 20:25:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.942 20:25:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.942 20:25:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.942 20:25:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.942 20:25:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.942 20:25:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.942 20:25:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.942 20:25:47 -- paths/export.sh@5 -- # export PATH 00:25:09.942 20:25:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.942 20:25:47 -- nvmf/common.sh@46 -- # : 0 00:25:09.942 20:25:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:09.942 20:25:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:09.942 20:25:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:09.942 20:25:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.942 20:25:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.942 20:25:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:09.942 20:25:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:09.942 20:25:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:09.942 20:25:47 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:09.942 20:25:47 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:09.942 20:25:47 -- host/identify.sh@14 -- # nvmftestinit 00:25:09.942 20:25:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:09.942 20:25:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.942 20:25:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:09.942 20:25:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:09.942 20:25:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:09.942 20:25:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.942 20:25:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.942 20:25:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.942 20:25:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:09.942 20:25:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:09.942 20:25:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:09.943 20:25:47 -- common/autotest_common.sh@10 -- # set +x 00:25:16.511 20:25:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:16.511 20:25:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:16.511 20:25:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:16.511 20:25:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:16.511 20:25:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:16.511 20:25:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:16.511 20:25:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:16.511 20:25:53 -- nvmf/common.sh@294 -- # net_devs=() 00:25:16.511 20:25:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:16.511 20:25:53 -- nvmf/common.sh@295 
-- # e810=() 00:25:16.511 20:25:53 -- nvmf/common.sh@295 -- # local -ga e810 00:25:16.511 20:25:53 -- nvmf/common.sh@296 -- # x722=() 00:25:16.511 20:25:53 -- nvmf/common.sh@296 -- # local -ga x722 00:25:16.511 20:25:53 -- nvmf/common.sh@297 -- # mlx=() 00:25:16.511 20:25:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:16.511 20:25:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.511 20:25:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:16.511 20:25:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:16.511 20:25:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:16.511 20:25:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:16.511 20:25:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:16.511 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:16.511 20:25:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:16.511 20:25:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:16.511 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:16.511 20:25:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:16.511 20:25:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:16.511 20:25:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.511 20:25:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:16.511 20:25:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.511 20:25:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:16.511 Found 
net devices under 0000:af:00.0: cvl_0_0 00:25:16.511 20:25:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.511 20:25:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:16.511 20:25:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.511 20:25:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:16.511 20:25:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.511 20:25:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:16.511 Found net devices under 0000:af:00.1: cvl_0_1 00:25:16.511 20:25:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.511 20:25:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:16.511 20:25:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:16.511 20:25:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:16.511 20:25:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:16.511 20:25:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.511 20:25:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.511 20:25:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.511 20:25:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:16.511 20:25:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.511 20:25:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.511 20:25:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:16.511 20:25:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.511 20:25:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.511 20:25:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:16.511 20:25:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:16.511 20:25:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.511 20:25:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.511 20:25:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.511 20:25:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.511 20:25:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:16.511 20:25:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.511 20:25:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.511 20:25:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.511 20:25:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:16.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:25:16.511 00:25:16.511 --- 10.0.0.2 ping statistics --- 00:25:16.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.511 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:16.511 20:25:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:25:16.511 00:25:16.511 --- 10.0.0.1 ping statistics --- 00:25:16.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.511 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:25:16.512 20:25:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.512 20:25:53 -- nvmf/common.sh@410 -- # return 0 00:25:16.512 20:25:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:16.512 20:25:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.512 20:25:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:16.512 20:25:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:16.512 20:25:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.512 20:25:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:16.512 20:25:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:16.512 20:25:53 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:16.512 20:25:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:16.512 20:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.512 20:25:53 -- host/identify.sh@19 -- # nvmfpid=1897782 00:25:16.512 20:25:53 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:16.512 20:25:53 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.512 20:25:53 -- host/identify.sh@23 -- # waitforlisten 1897782 00:25:16.512 20:25:53 -- common/autotest_common.sh@817 -- # '[' -z 1897782 ']' 00:25:16.512 20:25:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.512 20:25:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:16.512 20:25:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.512 20:25:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:16.512 20:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.512 [2024-02-14 20:25:53.400126] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:25:16.512 [2024-02-14 20:25:53.400170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.512 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.512 [2024-02-14 20:25:53.463977] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.512 [2024-02-14 20:25:53.535157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:16.512 [2024-02-14 20:25:53.535286] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.512 [2024-02-14 20:25:53.535294] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.512 [2024-02-14 20:25:53.535300] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
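The nvmf_tcp_init block above splits the two E810 ports into a self-contained loopback fabric: the target-side port (cvl_0_0) moves into a network namespace, its peer (cvl_0_1) stays in the root namespace as the initiator, and both directions are ping-verified before the target app comes up. A minimal sketch of that plumbing, assuming the same interface names and addresses as this run (they will differ on other hosts):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator

The namespace split is what lets a single machine act as both NVMe/TCP target and initiator across real NIC hardware rather than the loopback device; nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which produces the DPDK/EAL startup and reactor messages around this point.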
00:25:16.512 [2024-02-14 20:25:53.535352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.512 [2024-02-14 20:25:53.535450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.512 [2024-02-14 20:25:53.535539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.512 [2024-02-14 20:25:53.535540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.085 20:25:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:17.085 20:25:54 -- common/autotest_common.sh@850 -- # return 0 00:25:17.085 20:25:54 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:17.085 20:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.085 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 [2024-02-14 20:25:54.213802] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.085 20:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.085 20:25:54 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:17.085 20:25:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:17.085 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 20:25:54 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:17.085 20:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.085 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 Malloc0 00:25:17.085 20:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.085 20:25:54 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:17.085 20:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.085 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 20:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.085 20:25:54 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:17.085 20:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.085 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 20:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.085 20:25:54 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.085 20:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.085 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 [2024-02-14 20:25:54.301608] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.085 20:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.085 20:25:54 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:17.085 20:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.085 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 20:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.085 20:25:54 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:17.085 20:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.085 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 [2024-02-14 20:25:54.317443] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:17.085 [ 
00:25:17.085 { 00:25:17.085 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:17.085 "subtype": "Discovery", 00:25:17.085 "listen_addresses": [ 00:25:17.085 { 00:25:17.085 "transport": "TCP", 00:25:17.085 "trtype": "TCP", 00:25:17.085 "adrfam": "IPv4", 00:25:17.085 "traddr": "10.0.0.2", 00:25:17.085 "trsvcid": "4420" 00:25:17.085 } 00:25:17.085 ], 00:25:17.085 "allow_any_host": true, 00:25:17.085 "hosts": [] 00:25:17.085 }, 00:25:17.085 { 00:25:17.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.085 "subtype": "NVMe", 00:25:17.085 "listen_addresses": [ 00:25:17.085 { 00:25:17.085 "transport": "TCP", 00:25:17.085 "trtype": "TCP", 00:25:17.085 "adrfam": "IPv4", 00:25:17.085 "traddr": "10.0.0.2", 00:25:17.085 "trsvcid": "4420" 00:25:17.085 } 00:25:17.085 ], 00:25:17.085 "allow_any_host": true, 00:25:17.085 "hosts": [], 00:25:17.085 "serial_number": "SPDK00000000000001", 00:25:17.085 "model_number": "SPDK bdev Controller", 00:25:17.085 "max_namespaces": 32, 00:25:17.085 "min_cntlid": 1, 00:25:17.085 "max_cntlid": 65519, 00:25:17.085 "namespaces": [ 00:25:17.085 { 00:25:17.085 "nsid": 1, 00:25:17.085 "bdev_name": "Malloc0", 00:25:17.085 "name": "Malloc0", 00:25:17.085 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:17.085 "eui64": "ABCDEF0123456789", 00:25:17.085 "uuid": "96819dac-f707-47ff-be37-69301ca7c0f4" 00:25:17.085 } 00:25:17.085 ] 00:25:17.085 } 00:25:17.085 ] 00:25:17.085 20:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.085 20:25:54 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:17.085 [2024-02-14 20:25:54.351244] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
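With the target app up, the harness assembles the storage stack over JSON-RPC: a TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and listeners on 10.0.0.2:4420 for both the subsystem and discovery; the nvmf_get_subsystems dump above reflects exactly that state. Condensed into plain RPC calls (a sketch only: rpc.py stands in for the harness's rpc_cmd wrapper, which also routes calls through the target's network namespace):

    RPC="scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems

The spdk_nvme_identify invocation just above then connects to the discovery NQN from the initiator side with -L all, so every transport-level step of the connection is traced in the debug records that follow.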
00:25:17.085 [2024-02-14 20:25:54.351281] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898022 ] 00:25:17.085 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.085 [2024-02-14 20:25:54.379973] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:17.085 [2024-02-14 20:25:54.380016] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:17.085 [2024-02-14 20:25:54.380021] nvme_tcp.c:2246:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:17.085 [2024-02-14 20:25:54.380033] nvme_tcp.c:2264:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:17.085 [2024-02-14 20:25:54.380040] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:17.085 [2024-02-14 20:25:54.380605] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:17.085 [2024-02-14 20:25:54.380635] nvme_tcp.c:1485:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1560b80 0 00:25:17.085 [2024-02-14 20:25:54.394654] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:17.085 [2024-02-14 20:25:54.394673] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:17.085 [2024-02-14 20:25:54.394677] nvme_tcp.c:1531:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:17.085 [2024-02-14 20:25:54.394680] nvme_tcp.c:1532:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:17.085 [2024-02-14 20:25:54.394721] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.085 [2024-02-14 20:25:54.394727] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.085 [2024-02-14 20:25:54.394730] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.085 [2024-02-14 20:25:54.394743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:17.085 [2024-02-14 20:25:54.394760] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.085 [2024-02-14 20:25:54.402658] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.085 [2024-02-14 20:25:54.402666] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.086 [2024-02-14 20:25:54.402669] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.402673] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c88d0) on tqpair=0x1560b80 00:25:17.086 [2024-02-14 20:25:54.402686] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:17.086 [2024-02-14 20:25:54.402692] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:17.086 [2024-02-14 20:25:54.402700] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:17.086 [2024-02-14 20:25:54.402714] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.402718] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.402721] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.086 [2024-02-14 20:25:54.402727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.086 [2024-02-14 20:25:54.402740] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.086 [2024-02-14 20:25:54.402985] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.086 [2024-02-14 20:25:54.402996] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.086 [2024-02-14 20:25:54.402999] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403003] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c88d0) on tqpair=0x1560b80 00:25:17.086 [2024-02-14 20:25:54.403012] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:17.086 [2024-02-14 20:25:54.403020] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:17.086 [2024-02-14 20:25:54.403028] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403031] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403034] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.086 [2024-02-14 20:25:54.403042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.086 [2024-02-14 20:25:54.403055] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.086 [2024-02-14 20:25:54.403182] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.086 [2024-02-14 20:25:54.403190] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.086 [2024-02-14 20:25:54.403194] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403197] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c88d0) on tqpair=0x1560b80 00:25:17.086 [2024-02-14 20:25:54.403203] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:17.086 [2024-02-14 20:25:54.403211] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:17.086 [2024-02-14 20:25:54.403217] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403220] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403224] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.086 [2024-02-14 20:25:54.403230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.086 [2024-02-14 20:25:54.403242] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.086 [2024-02-14 20:25:54.403367] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.086 [2024-02-14 
20:25:54.403375] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.086 [2024-02-14 20:25:54.403378] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403382] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c88d0) on tqpair=0x1560b80 00:25:17.086 [2024-02-14 20:25:54.403388] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:17.086 [2024-02-14 20:25:54.403401] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403405] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403408] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.086 [2024-02-14 20:25:54.403414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.086 [2024-02-14 20:25:54.403426] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.086 [2024-02-14 20:25:54.403547] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.086 [2024-02-14 20:25:54.403555] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.086 [2024-02-14 20:25:54.403558] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403562] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c88d0) on tqpair=0x1560b80 00:25:17.086 [2024-02-14 20:25:54.403567] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:17.086 [2024-02-14 20:25:54.403571] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:17.086 [2024-02-14 20:25:54.403579] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:17.086 [2024-02-14 20:25:54.403685] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:17.086 [2024-02-14 20:25:54.403689] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:17.086 [2024-02-14 20:25:54.403698] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403702] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403705] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.086 [2024-02-14 20:25:54.403711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.086 [2024-02-14 20:25:54.403723] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.086 [2024-02-14 20:25:54.403850] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.086 [2024-02-14 20:25:54.403858] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.086 [2024-02-14 20:25:54.403861] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403864] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c88d0) on tqpair=0x1560b80 00:25:17.086 [2024-02-14 20:25:54.403870] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:17.086 [2024-02-14 20:25:54.403880] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403883] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.403886] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.086 [2024-02-14 20:25:54.403892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.086 [2024-02-14 20:25:54.403904] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.086 [2024-02-14 20:25:54.404033] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.086 [2024-02-14 20:25:54.404041] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.086 [2024-02-14 20:25:54.404044] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.404047] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c88d0) on tqpair=0x1560b80 00:25:17.086 [2024-02-14 20:25:54.404052] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:17.086 [2024-02-14 20:25:54.404059] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:17.086 [2024-02-14 20:25:54.404067] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:17.086 [2024-02-14 20:25:54.404075] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:17.086 [2024-02-14 20:25:54.404083] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.404087] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.404089] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.086 [2024-02-14 20:25:54.404096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.086 [2024-02-14 20:25:54.404108] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.086 [2024-02-14 20:25:54.404330] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.086 [2024-02-14 20:25:54.404341] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.086 [2024-02-14 20:25:54.404344] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.404347] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1560b80): datao=0, datal=4096, cccid=0 00:25:17.086 [2024-02-14 20:25:54.404351] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c88d0) on tqpair(0x1560b80): 
expected_datao=0, payload_size=4096 00:25:17.086 [2024-02-14 20:25:54.404359] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.404363] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.404582] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.086 [2024-02-14 20:25:54.404588] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.086 [2024-02-14 20:25:54.404591] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.086 [2024-02-14 20:25:54.404594] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c88d0) on tqpair=0x1560b80 00:25:17.086 [2024-02-14 20:25:54.404602] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:17.086 [2024-02-14 20:25:54.404610] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:17.087 [2024-02-14 20:25:54.404614] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:17.087 [2024-02-14 20:25:54.404619] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:17.087 [2024-02-14 20:25:54.404622] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:17.087 [2024-02-14 20:25:54.404626] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:17.087 [2024-02-14 20:25:54.404635] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:17.087 [2024-02-14 20:25:54.404642] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404645] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404656] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.087 [2024-02-14 20:25:54.404663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:17.087 [2024-02-14 20:25:54.404675] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.087 [2024-02-14 20:25:54.404806] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.087 [2024-02-14 20:25:54.404815] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.087 [2024-02-14 20:25:54.404818] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404822] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c88d0) on tqpair=0x1560b80 00:25:17.087 [2024-02-14 20:25:54.404831] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404834] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404837] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1560b80) 00:25:17.087 [2024-02-14 20:25:54.404843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
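The trace is walking the standard NVMe-over-Fabrics admin-queue bring-up one property at a time. As an orientation map of the records around this point (annotation only, not additional output):

    # FABRIC CONNECT qid:0          -> create the admin queue; returns CNTLID 0x0001
    # FABRIC PROPERTY GET (VS, CAP) -> read controller version and capabilities
    # FABRIC PROPERTY GET/SET CC    -> check CC.EN, disable until CSTS.RDY = 0
    # FABRIC PROPERTY SET CC.EN = 1 -> enable, then poll until CSTS.RDY = 1
    # IDENTIFY (06) cdw10:0x01      -> fetch the 4096-byte identify-controller page
    # SET FEATURES 0x0b             -> async event configuration
    # ASYNC EVENT REQUEST (0c)      -> four AERs armed (cid 0..3)
    # GET FEATURES 0x0f             -> keep-alive timer, then periodic KEEP ALIVE (18)

Each "pdu type = 5" / "pdu type = 7" record underneath is the NVMe/TCP framing for those commands: type 5 is a capsule response, type 7 a controller-to-host data PDU carrying the payload.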
00:25:17.087 [2024-02-14 20:25:54.404848] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404851] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404854] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1560b80) 00:25:17.087 [2024-02-14 20:25:54.404859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.087 [2024-02-14 20:25:54.404864] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404867] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404870] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1560b80) 00:25:17.087 [2024-02-14 20:25:54.404875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.087 [2024-02-14 20:25:54.404879] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404882] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404885] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.087 [2024-02-14 20:25:54.404890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.087 [2024-02-14 20:25:54.404894] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:17.087 [2024-02-14 20:25:54.404907] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:17.087 [2024-02-14 20:25:54.404913] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404916] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.404919] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1560b80) 00:25:17.087 [2024-02-14 20:25:54.404924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.087 [2024-02-14 20:25:54.404937] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c88d0, cid 0, qid 0 00:25:17.087 [2024-02-14 20:25:54.404942] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8a30, cid 1, qid 0 00:25:17.087 [2024-02-14 20:25:54.404946] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8b90, cid 2, qid 0 00:25:17.087 [2024-02-14 20:25:54.404949] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.087 [2024-02-14 20:25:54.404953] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8e50, cid 4, qid 0 00:25:17.087 [2024-02-14 20:25:54.405118] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.087 [2024-02-14 20:25:54.405127] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.087 [2024-02-14 20:25:54.405130] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405136] nvme_tcp.c: 
855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8e50) on tqpair=0x1560b80 00:25:17.087 [2024-02-14 20:25:54.405142] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:17.087 [2024-02-14 20:25:54.405147] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:17.087 [2024-02-14 20:25:54.405158] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405162] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405165] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1560b80) 00:25:17.087 [2024-02-14 20:25:54.405171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.087 [2024-02-14 20:25:54.405183] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8e50, cid 4, qid 0 00:25:17.087 [2024-02-14 20:25:54.405321] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.087 [2024-02-14 20:25:54.405330] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.087 [2024-02-14 20:25:54.405333] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405336] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1560b80): datao=0, datal=4096, cccid=4 00:25:17.087 [2024-02-14 20:25:54.405340] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c8e50) on tqpair(0x1560b80): expected_datao=0, payload_size=4096 00:25:17.087 [2024-02-14 20:25:54.405347] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405350] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405562] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.087 [2024-02-14 20:25:54.405567] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.087 [2024-02-14 20:25:54.405570] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405573] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8e50) on tqpair=0x1560b80 00:25:17.087 [2024-02-14 20:25:54.405587] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:17.087 [2024-02-14 20:25:54.405607] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405610] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405613] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1560b80) 00:25:17.087 [2024-02-14 20:25:54.405620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.087 [2024-02-14 20:25:54.405626] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405629] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405632] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1560b80) 00:25:17.087 [2024-02-14 
20:25:54.405637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.087 [2024-02-14 20:25:54.405660] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8e50, cid 4, qid 0 00:25:17.087 [2024-02-14 20:25:54.405665] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8fb0, cid 5, qid 0 00:25:17.087 [2024-02-14 20:25:54.405826] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.087 [2024-02-14 20:25:54.405835] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.087 [2024-02-14 20:25:54.405838] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405841] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1560b80): datao=0, datal=1024, cccid=4 00:25:17.087 [2024-02-14 20:25:54.405845] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c8e50) on tqpair(0x1560b80): expected_datao=0, payload_size=1024 00:25:17.087 [2024-02-14 20:25:54.405855] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405858] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405863] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.087 [2024-02-14 20:25:54.405868] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.087 [2024-02-14 20:25:54.405871] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.087 [2024-02-14 20:25:54.405874] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8fb0) on tqpair=0x1560b80 00:25:17.087 [2024-02-14 20:25:54.446872] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.088 [2024-02-14 20:25:54.446887] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.088 [2024-02-14 20:25:54.446891] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.446894] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8e50) on tqpair=0x1560b80 00:25:17.088 [2024-02-14 20:25:54.446906] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.446911] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.446914] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1560b80) 00:25:17.088 [2024-02-14 20:25:54.446922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.088 [2024-02-14 20:25:54.446940] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8e50, cid 4, qid 0 00:25:17.088 [2024-02-14 20:25:54.447083] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.088 [2024-02-14 20:25:54.447092] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.088 [2024-02-14 20:25:54.447095] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.447098] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1560b80): datao=0, datal=3072, cccid=4 00:25:17.088 [2024-02-14 20:25:54.447104] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c8e50) on tqpair(0x1560b80): expected_datao=0, payload_size=3072 
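Note: the GET LOG PAGE (02) admin commands traced above and below (cdw10 values 00ff0070, 02ff0070, 00010070 all carry log identifier 0x70 in the low byte) are Discovery log page reads: the host fetches the header, then the entries, then re-checks the generation counter. A minimal host-side sketch of the same read using SPDK's public API; the type and function names come from the spdk/nvme.h and spdk/nvmf_spec.h headers of this SPDK version, while the helper function and its polling loop are illustrative only:

    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    /* Completion callback: flag the admin request as done. */
    static void log_page_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)ctx = true;
    }

    /* Read the discovery log page header (genctr, recfmt, numrec) from an
     * already-attached discovery controller. Real code would follow up with
     * a second call at offset sizeof(*hdr) to pull the numrec entries. */
    static int read_discovery_log_header(struct spdk_nvme_ctrlr *ctrlr,
                                         struct spdk_nvmf_discovery_log_page *hdr)
    {
        bool done = false;
        int rc;

        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                              0 /* nsid, as in the log */, hdr,
                                              sizeof(*hdr), 0 /* offset */,
                                              log_page_done, &done);
        if (rc != 0) {
            return rc;
        }
        while (!done) {
            /* Poll the admin queue until the completion arrives. */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }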
00:25:17.088 [2024-02-14 20:25:54.447313] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.447318] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.447411] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.088 [2024-02-14 20:25:54.447420] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.088 [2024-02-14 20:25:54.447425] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.447428] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8e50) on tqpair=0x1560b80 00:25:17.088 [2024-02-14 20:25:54.447439] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.447443] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.447446] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1560b80) 00:25:17.088 [2024-02-14 20:25:54.447452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.088 [2024-02-14 20:25:54.447469] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8e50, cid 4, qid 0 00:25:17.088 [2024-02-14 20:25:54.447599] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.088 [2024-02-14 20:25:54.447607] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.088 [2024-02-14 20:25:54.447610] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.447614] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1560b80): datao=0, datal=8, cccid=4 00:25:17.088 [2024-02-14 20:25:54.447617] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15c8e50) on tqpair(0x1560b80): expected_datao=0, payload_size=8 00:25:17.088 [2024-02-14 20:25:54.447629] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.447633] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.488004] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.088 [2024-02-14 20:25:54.488014] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.088 [2024-02-14 20:25:54.488017] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.088 [2024-02-14 20:25:54.488021] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8e50) on tqpair=0x1560b80
00:25:17.088 =====================================================
00:25:17.088 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:17.088 =====================================================
00:25:17.088 Controller Capabilities/Features
00:25:17.088 ================================
00:25:17.088 Vendor ID: 0000
00:25:17.088 Subsystem Vendor ID: 0000
00:25:17.088 Serial Number: ....................
00:25:17.088 Model Number: ........................................
00:25:17.088 Firmware Version: 24.05
00:25:17.088 Recommended Arb Burst: 0
00:25:17.088 IEEE OUI Identifier: 00 00 00
00:25:17.088 Multi-path I/O
00:25:17.088 May have multiple subsystem ports: No
00:25:17.088 May have multiple controllers: No
00:25:17.088 Associated with SR-IOV VF: No
00:25:17.088 Max Data Transfer Size: 131072
00:25:17.088 Max Number of Namespaces: 0
00:25:17.088 Max Number of I/O Queues: 1024
00:25:17.088 NVMe Specification Version (VS): 1.3
00:25:17.088 NVMe Specification Version (Identify): 1.3
00:25:17.088 Maximum Queue Entries: 128
00:25:17.088 Contiguous Queues Required: Yes
00:25:17.088 Arbitration Mechanisms Supported
00:25:17.088 Weighted Round Robin: Not Supported
00:25:17.088 Vendor Specific: Not Supported
00:25:17.088 Reset Timeout: 15000 ms
00:25:17.088 Doorbell Stride: 4 bytes
00:25:17.088 NVM Subsystem Reset: Not Supported
00:25:17.088 Command Sets Supported
00:25:17.088 NVM Command Set: Supported
00:25:17.088 Boot Partition: Not Supported
00:25:17.088 Memory Page Size Minimum: 4096 bytes
00:25:17.088 Memory Page Size Maximum: 4096 bytes
00:25:17.088 Persistent Memory Region: Not Supported
00:25:17.088 Optional Asynchronous Events Supported
00:25:17.088 Namespace Attribute Notices: Not Supported
00:25:17.088 Firmware Activation Notices: Not Supported
00:25:17.088 ANA Change Notices: Not Supported
00:25:17.088 PLE Aggregate Log Change Notices: Not Supported
00:25:17.088 LBA Status Info Alert Notices: Not Supported
00:25:17.088 EGE Aggregate Log Change Notices: Not Supported
00:25:17.088 Normal NVM Subsystem Shutdown event: Not Supported
00:25:17.088 Zone Descriptor Change Notices: Not Supported
00:25:17.088 Discovery Log Change Notices: Supported
00:25:17.088 Controller Attributes
00:25:17.088 128-bit Host Identifier: Not Supported
00:25:17.088 Non-Operational Permissive Mode: Not Supported
00:25:17.088 NVM Sets: Not Supported
00:25:17.088 Read Recovery Levels: Not Supported
00:25:17.088 Endurance Groups: Not Supported
00:25:17.088 Predictable Latency Mode: Not Supported
00:25:17.088 Traffic Based Keep ALive: Not Supported
00:25:17.088 Namespace Granularity: Not Supported
00:25:17.088 SQ Associations: Not Supported
00:25:17.088 UUID List: Not Supported
00:25:17.088 Multi-Domain Subsystem: Not Supported
00:25:17.088 Fixed Capacity Management: Not Supported
00:25:17.088 Variable Capacity Management: Not Supported
00:25:17.088 Delete Endurance Group: Not Supported
00:25:17.088 Delete NVM Set: Not Supported
00:25:17.088 Extended LBA Formats Supported: Not Supported
00:25:17.088 Flexible Data Placement Supported: Not Supported
00:25:17.088
00:25:17.088 Controller Memory Buffer Support
00:25:17.088 ================================
00:25:17.088 Supported: No
00:25:17.088
00:25:17.088 Persistent Memory Region Support
00:25:17.088 ================================
00:25:17.088 Supported: No
00:25:17.088
00:25:17.088 Admin Command Set Attributes
00:25:17.088 ============================
00:25:17.088 Security Send/Receive: Not Supported
00:25:17.088 Format NVM: Not Supported
00:25:17.088 Firmware Activate/Download: Not Supported
00:25:17.088 Namespace Management: Not Supported
00:25:17.088 Device Self-Test: Not Supported
00:25:17.088 Directives: Not Supported
00:25:17.088 NVMe-MI: Not Supported
00:25:17.088 Virtualization Management: Not Supported
00:25:17.088 Doorbell Buffer Config: Not Supported
00:25:17.088 Get LBA Status Capability: Not Supported
00:25:17.088 Command & Feature Lockdown Capability: Not Supported
00:25:17.088 Abort Command Limit: 1
00:25:17.088 Async Event Request Limit: 4
00:25:17.088 Number of Firmware Slots: N/A
00:25:17.088 Firmware Slot 1 Read-Only: N/A
00:25:17.088 Firmware Activation Without Reset: N/A
00:25:17.088 Multiple Update Detection Support: N/A
00:25:17.088 Firmware Update Granularity: No Information Provided
00:25:17.089 Per-Namespace SMART Log: No
00:25:17.089 Asymmetric Namespace Access Log Page: Not Supported
00:25:17.089 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:17.089 Command Effects Log Page: Not Supported
00:25:17.089 Get Log Page Extended Data: Supported
00:25:17.089 Telemetry Log Pages: Not Supported
00:25:17.089 Persistent Event Log Pages: Not Supported
00:25:17.089 Supported Log Pages Log Page: May Support
00:25:17.089 Commands Supported & Effects Log Page: Not Supported
00:25:17.089 Feature Identifiers & Effects Log Page:May Support
00:25:17.089 NVMe-MI Commands & Effects Log Page: May Support
00:25:17.089 Data Area 4 for Telemetry Log: Not Supported
00:25:17.089 Error Log Page Entries Supported: 128
00:25:17.089 Keep Alive: Not Supported
00:25:17.089
00:25:17.089 NVM Command Set Attributes
00:25:17.089 ==========================
00:25:17.089 Submission Queue Entry Size
00:25:17.089 Max: 1
00:25:17.089 Min: 1
00:25:17.089 Completion Queue Entry Size
00:25:17.089 Max: 1
00:25:17.089 Min: 1
00:25:17.089 Number of Namespaces: 0
00:25:17.089 Compare Command: Not Supported
00:25:17.089 Write Uncorrectable Command: Not Supported
00:25:17.089 Dataset Management Command: Not Supported
00:25:17.089 Write Zeroes Command: Not Supported
00:25:17.089 Set Features Save Field: Not Supported
00:25:17.089 Reservations: Not Supported
00:25:17.089 Timestamp: Not Supported
00:25:17.089 Copy: Not Supported
00:25:17.089 Volatile Write Cache: Not Present
00:25:17.089 Atomic Write Unit (Normal): 1
00:25:17.089 Atomic Write Unit (PFail): 1
00:25:17.089 Atomic Compare & Write Unit: 1
00:25:17.089 Fused Compare & Write: Supported
00:25:17.089 Scatter-Gather List
00:25:17.089 SGL Command Set: Supported
00:25:17.089 SGL Keyed: Supported
00:25:17.089 SGL Bit Bucket Descriptor: Not Supported
00:25:17.089 SGL Metadata Pointer: Not Supported
00:25:17.089 Oversized SGL: Not Supported
00:25:17.089 SGL Metadata Address: Not Supported
00:25:17.089 SGL Offset: Supported
00:25:17.089 Transport SGL Data Block: Not Supported
00:25:17.089 Replay Protected Memory Block: Not Supported
00:25:17.089
00:25:17.089 Firmware Slot Information
00:25:17.089 =========================
00:25:17.089 Active slot: 0
00:25:17.089
00:25:17.089
00:25:17.089 Error Log
00:25:17.089 =========
00:25:17.089
00:25:17.089 Active Namespaces
00:25:17.089 =================
00:25:17.089 Discovery Log Page
00:25:17.089 ==================
00:25:17.089 Generation Counter: 2
00:25:17.089 Number of Records: 2
00:25:17.089 Record Format: 0
00:25:17.089
00:25:17.089 Discovery Log Entry 0
00:25:17.089 ----------------------
00:25:17.089 Transport Type: 3 (TCP)
00:25:17.089 Address Family: 1 (IPv4)
00:25:17.089 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:17.089 Entry Flags:
00:25:17.089 Duplicate Returned Information: 1
00:25:17.089 Explicit Persistent Connection Support for Discovery: 1
00:25:17.089 Transport Requirements:
00:25:17.089 Secure Channel: Not Required
00:25:17.089 Port ID: 0 (0x0000)
00:25:17.089 Controller ID: 65535 (0xffff)
00:25:17.089 Admin Max SQ Size: 128
00:25:17.089 Transport Service Identifier: 4420
00:25:17.089 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:17.089 Transport Address: 10.0.0.2
00:25:17.089 Discovery Log Entry 1
00:25:17.089 ----------------------
00:25:17.089 Transport Type: 3 (TCP)
00:25:17.089 Address Family: 1 (IPv4)
00:25:17.089 Subsystem Type: 2 (NVM Subsystem)
00:25:17.089 Entry Flags:
00:25:17.089 Duplicate Returned Information: 0
00:25:17.089 Explicit Persistent Connection Support for Discovery: 0
00:25:17.089 Transport Requirements:
00:25:17.089 Secure Channel: Not Required
00:25:17.089 Port ID: 0 (0x0000)
00:25:17.089 Controller ID: 65535 (0xffff)
00:25:17.089 Admin Max SQ Size: 128
00:25:17.089 Transport Service Identifier: 4420
00:25:17.089 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:17.089 Transport Address: 10.0.0.2 [2024-02-14 20:25:54.488096] nvme_ctrlr.c:4208:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:25:17.089 [2024-02-14 20:25:54.488109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.089 [2024-02-14 20:25:54.488114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.089 [2024-02-14 20:25:54.488119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.089 [2024-02-14 20:25:54.488124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.089 [2024-02-14 20:25:54.488134] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488138] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488141] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.089 [2024-02-14 20:25:54.488147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.089 [2024-02-14 20:25:54.488160] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.089 [2024-02-14 20:25:54.488286] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.089 [2024-02-14 20:25:54.488295] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.089 [2024-02-14 20:25:54.488298] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488302] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.089 [2024-02-14 20:25:54.488309] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488313] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488316] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.089 [2024-02-14 20:25:54.488322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.089 [2024-02-14 20:25:54.488338] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.089 [2024-02-14 20:25:54.488470] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.089 [2024-02-14 20:25:54.488479] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.089 [2024-02-14 20:25:54.488482] 
nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488485] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.089 [2024-02-14 20:25:54.488491] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:17.089 [2024-02-14 20:25:54.488495] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:17.089 [2024-02-14 20:25:54.488505] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488508] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488511] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.089 [2024-02-14 20:25:54.488518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.089 [2024-02-14 20:25:54.488532] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.089 [2024-02-14 20:25:54.488666] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.089 [2024-02-14 20:25:54.488675] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.089 [2024-02-14 20:25:54.488678] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488682] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.089 [2024-02-14 20:25:54.488694] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488697] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.089 [2024-02-14 20:25:54.488700] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.089 [2024-02-14 20:25:54.488707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.089 [2024-02-14 20:25:54.488719] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.090 [2024-02-14 20:25:54.488852] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.090 [2024-02-14 20:25:54.488860] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.090 [2024-02-14 20:25:54.488863] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.488867] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.090 [2024-02-14 20:25:54.488878] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.488881] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.488884] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.090 [2024-02-14 20:25:54.488890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.090 [2024-02-14 20:25:54.488902] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.090 [2024-02-14 20:25:54.489038] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.090 [2024-02-14 
20:25:54.489046] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.090 [2024-02-14 20:25:54.489049] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489053] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.090 [2024-02-14 20:25:54.489063] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489066] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489069] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.090 [2024-02-14 20:25:54.489076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.090 [2024-02-14 20:25:54.489087] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.090 [2024-02-14 20:25:54.489212] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.090 [2024-02-14 20:25:54.489220] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.090 [2024-02-14 20:25:54.489223] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489226] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.090 [2024-02-14 20:25:54.489237] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489241] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489244] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.090 [2024-02-14 20:25:54.489250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.090 [2024-02-14 20:25:54.489265] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.090 [2024-02-14 20:25:54.489397] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.090 [2024-02-14 20:25:54.489406] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.090 [2024-02-14 20:25:54.489409] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489412] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.090 [2024-02-14 20:25:54.489423] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489426] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489429] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.090 [2024-02-14 20:25:54.489436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.090 [2024-02-14 20:25:54.489447] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.090 [2024-02-14 20:25:54.489583] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.090 [2024-02-14 20:25:54.489592] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.090 [2024-02-14 20:25:54.489595] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
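Note: the FABRIC PROPERTY SET followed by the repeated FABRIC PROPERTY GET capsules above are the NVMe shutdown handshake: the host writes CC.SHN = 01b, then polls CSTS.SHST until it reads "shutdown complete", bounded by the 10000 ms shutdown timeout chosen earlier because the discovery controller reports RTD3E = 0 (it finishes in 5 ms just below). Application code does not drive this by hand; a sketch of the equivalent teardown through SPDK's public API, with the wrapper function name being illustrative:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative teardown helper: spdk_nvme_detach() performs the CC.SHN
     * write and the CSTS.SHST polling traced above, then frees the
     * controller handle. */
    static void teardown_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        /* SHST: 0 = normal operation, 1 = shutdown occurring, 2 = complete. */
        printf("CSTS.SHST before detach: %u\n", (unsigned)csts.bits.shst);

        if (spdk_nvme_detach(ctrlr) != 0) {
            fprintf(stderr, "detach failed\n");
        }
    }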
00:25:17.090 [2024-02-14 20:25:54.489598] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.090 [2024-02-14 20:25:54.489609] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489612] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.489615] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.090 [2024-02-14 20:25:54.489621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.090 [2024-02-14 20:25:54.489633] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.090 [2024-02-14 20:25:54.493656] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.090 [2024-02-14 20:25:54.493663] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.090 [2024-02-14 20:25:54.493665] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.493669] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.090 [2024-02-14 20:25:54.493678] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.493682] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.493685] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1560b80) 00:25:17.090 [2024-02-14 20:25:54.493691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.090 [2024-02-14 20:25:54.493702] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15c8cf0, cid 3, qid 0 00:25:17.090 [2024-02-14 20:25:54.493918] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.090 [2024-02-14 20:25:54.493927] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.090 [2024-02-14 20:25:54.493930] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.090 [2024-02-14 20:25:54.493933] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15c8cf0) on tqpair=0x1560b80 00:25:17.090 [2024-02-14 20:25:54.493942] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:25:17.353 00:25:17.353 20:25:54 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:17.353 [2024-02-14 20:25:54.529178] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
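Note: this second spdk_nvme_identify invocation targets the I/O subsystem (nqn.2016-06.io.spdk:cnode1) advertised by Discovery Log Entry 1 above; the -r argument is a transport ID string in key:value form. The same string can be parsed and connected programmatically. A minimal sketch with SPDK's public API; it assumes the SPDK environment (spdk_env_init()) is already initialized, and the helper name is illustrative:

    #include <stddef.h>
    #include "spdk/nvme.h"

    /* Attach to the same target the identify tool probes in the log line
     * above. Returns NULL on parse or connect failure. */
    static struct spdk_nvme_ctrlr *connect_cnode1(void)
    {
        struct spdk_nvme_transport_id trid = {0};

        /* Same key:value syntax as the -r argument above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return NULL;
        }
        return spdk_nvme_connect(&trid, NULL /* default ctrlr opts */, 0);
    }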
00:25:17.353 [2024-02-14 20:25:54.529224] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898038 ] 00:25:17.353 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.353 [2024-02-14 20:25:54.560675] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:17.353 [2024-02-14 20:25:54.560721] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:17.353 [2024-02-14 20:25:54.560725] nvme_tcp.c:2246:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:17.353 [2024-02-14 20:25:54.560736] nvme_tcp.c:2264:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:17.353 [2024-02-14 20:25:54.560743] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:17.353 [2024-02-14 20:25:54.561120] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:17.353 [2024-02-14 20:25:54.561142] nvme_tcp.c:1485:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e1cb80 0 00:25:17.353 [2024-02-14 20:25:54.567656] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:17.353 [2024-02-14 20:25:54.567672] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:17.353 [2024-02-14 20:25:54.567676] nvme_tcp.c:1531:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:17.353 [2024-02-14 20:25:54.567679] nvme_tcp.c:1532:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:17.353 [2024-02-14 20:25:54.567710] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.353 [2024-02-14 20:25:54.567715] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.353 [2024-02-14 20:25:54.567718] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.353 [2024-02-14 20:25:54.567728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:17.353 [2024-02-14 20:25:54.567745] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.353 [2024-02-14 20:25:54.574656] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.353 [2024-02-14 20:25:54.574663] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.353 [2024-02-14 20:25:54.574667] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.353 [2024-02-14 20:25:54.574670] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e848d0) on tqpair=0x1e1cb80 00:25:17.353 [2024-02-14 20:25:54.574678] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:17.353 [2024-02-14 20:25:54.574684] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:17.353 [2024-02-14 20:25:54.574688] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:17.353 [2024-02-14 20:25:54.574700] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.353 [2024-02-14 20:25:54.574703] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.353 [2024-02-14 
20:25:54.574706] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.353 [2024-02-14 20:25:54.574713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.353 [2024-02-14 20:25:54.574726] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.353 [2024-02-14 20:25:54.574945] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.353 [2024-02-14 20:25:54.574956] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.353 [2024-02-14 20:25:54.574959] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.353 [2024-02-14 20:25:54.574966] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e848d0) on tqpair=0x1e1cb80 00:25:17.353 [2024-02-14 20:25:54.574975] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:17.353 [2024-02-14 20:25:54.574983] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:17.353 [2024-02-14 20:25:54.574991] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.353 [2024-02-14 20:25:54.574994] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.353 [2024-02-14 20:25:54.574997] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.353 [2024-02-14 20:25:54.575005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.353 [2024-02-14 20:25:54.575018] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.353 [2024-02-14 20:25:54.575200] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.353 [2024-02-14 20:25:54.575206] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.353 [2024-02-14 20:25:54.575208] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.353 [2024-02-14 20:25:54.575212] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e848d0) on tqpair=0x1e1cb80 00:25:17.353 [2024-02-14 20:25:54.575216] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:17.354 [2024-02-14 20:25:54.575224] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:17.354 [2024-02-14 20:25:54.575229] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575232] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575235] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.354 [2024-02-14 20:25:54.575241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.354 [2024-02-14 20:25:54.575250] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.354 [2024-02-14 20:25:54.575387] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.354 [2024-02-14 20:25:54.575396] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
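Note: the "read vs" / "read cap" states above are the first steps of the controller-initialization state machine; on a fabrics transport each register read is carried as a FABRIC PROPERTY GET capsule rather than a PCIe MMIO read. Once the controller is attached, the same register values are available from the public getters. A small sketch (SPDK public API; the print helper is illustrative):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Print the VS and CAP values whose fabric property reads are traced
     * above, using the cached copies held by the attached controller. */
    static void print_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

        /* MQES is zero-based, hence the +1; CAP.TO is in 500 ms units. */
        printf("NVMe %u.%u, max queue entries=%u, timeout=%u x 500ms\n",
               (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
               (unsigned)cap.bits.mqes + 1, (unsigned)cap.bits.to);
    }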
00:25:17.354 [2024-02-14 20:25:54.575399] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575402] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e848d0) on tqpair=0x1e1cb80 00:25:17.354 [2024-02-14 20:25:54.575407] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:17.354 [2024-02-14 20:25:54.575417] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575421] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575424] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.354 [2024-02-14 20:25:54.575430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.354 [2024-02-14 20:25:54.575441] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.354 [2024-02-14 20:25:54.575572] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.354 [2024-02-14 20:25:54.575581] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.354 [2024-02-14 20:25:54.575584] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575587] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e848d0) on tqpair=0x1e1cb80 00:25:17.354 [2024-02-14 20:25:54.575591] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:17.354 [2024-02-14 20:25:54.575598] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:17.354 [2024-02-14 20:25:54.575606] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:17.354 [2024-02-14 20:25:54.575711] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:17.354 [2024-02-14 20:25:54.575715] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:17.354 [2024-02-14 20:25:54.575722] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575725] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575728] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.354 [2024-02-14 20:25:54.575735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.354 [2024-02-14 20:25:54.575747] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.354 [2024-02-14 20:25:54.575872] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.354 [2024-02-14 20:25:54.575880] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.354 [2024-02-14 20:25:54.575883] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575886] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e848d0) on 
tqpair=0x1e1cb80 00:25:17.354 [2024-02-14 20:25:54.575891] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:17.354 [2024-02-14 20:25:54.575901] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575904] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.575907] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.354 [2024-02-14 20:25:54.575913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.354 [2024-02-14 20:25:54.575925] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.354 [2024-02-14 20:25:54.576050] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.354 [2024-02-14 20:25:54.576058] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.354 [2024-02-14 20:25:54.576061] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576064] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e848d0) on tqpair=0x1e1cb80 00:25:17.354 [2024-02-14 20:25:54.576069] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:17.354 [2024-02-14 20:25:54.576073] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:17.354 [2024-02-14 20:25:54.576081] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:17.354 [2024-02-14 20:25:54.576089] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:17.354 [2024-02-14 20:25:54.576096] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576099] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576102] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.354 [2024-02-14 20:25:54.576109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.354 [2024-02-14 20:25:54.576120] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.354 [2024-02-14 20:25:54.576269] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.354 [2024-02-14 20:25:54.576278] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.354 [2024-02-14 20:25:54.576281] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576284] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1cb80): datao=0, datal=4096, cccid=0 00:25:17.354 [2024-02-14 20:25:54.576288] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e848d0) on tqpair(0x1e1cb80): expected_datao=0, payload_size=4096 00:25:17.354 [2024-02-14 20:25:54.576391] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576395] 
nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576493] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.354 [2024-02-14 20:25:54.576501] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.354 [2024-02-14 20:25:54.576504] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576507] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e848d0) on tqpair=0x1e1cb80 00:25:17.354 [2024-02-14 20:25:54.576515] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:17.354 [2024-02-14 20:25:54.576522] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:17.354 [2024-02-14 20:25:54.576526] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:17.354 [2024-02-14 20:25:54.576530] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:17.354 [2024-02-14 20:25:54.576533] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:17.354 [2024-02-14 20:25:54.576537] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:17.354 [2024-02-14 20:25:54.576546] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:17.354 [2024-02-14 20:25:54.576552] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576555] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576558] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.354 [2024-02-14 20:25:54.576565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:17.354 [2024-02-14 20:25:54.576577] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.354 [2024-02-14 20:25:54.576719] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.354 [2024-02-14 20:25:54.576728] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.354 [2024-02-14 20:25:54.576731] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576734] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e848d0) on tqpair=0x1e1cb80 00:25:17.354 [2024-02-14 20:25:54.576741] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576744] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576747] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1cb80) 00:25:17.354 [2024-02-14 20:25:54.576752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.354 [2024-02-14 20:25:54.576757] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576760] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576763] 
nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e1cb80) 00:25:17.354 [2024-02-14 20:25:54.576768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.354 [2024-02-14 20:25:54.576775] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576779] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.354 [2024-02-14 20:25:54.576782] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e1cb80) 00:25:17.354 [2024-02-14 20:25:54.576786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.355 [2024-02-14 20:25:54.576791] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.576794] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.576797] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1cb80) 00:25:17.355 [2024-02-14 20:25:54.576802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.355 [2024-02-14 20:25:54.576806] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.576816] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.576822] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.576825] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.576828] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1cb80) 00:25:17.355 [2024-02-14 20:25:54.576834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.355 [2024-02-14 20:25:54.576847] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848d0, cid 0, qid 0 00:25:17.355 [2024-02-14 20:25:54.576851] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84a30, cid 1, qid 0 00:25:17.355 [2024-02-14 20:25:54.576855] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84b90, cid 2, qid 0 00:25:17.355 [2024-02-14 20:25:54.576859] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84cf0, cid 3, qid 0 00:25:17.355 [2024-02-14 20:25:54.576863] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84e50, cid 4, qid 0 00:25:17.355 [2024-02-14 20:25:54.577028] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.355 [2024-02-14 20:25:54.577037] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.355 [2024-02-14 20:25:54.577040] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.577043] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84e50) on tqpair=0x1e1cb80 00:25:17.355 [2024-02-14 20:25:54.577048] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:17.355 
[2024-02-14 20:25:54.577053] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.577061] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.577066] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.577072] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.577075] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.577078] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1cb80) 00:25:17.355 [2024-02-14 20:25:54.577084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:17.355 [2024-02-14 20:25:54.577096] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84e50, cid 4, qid 0 00:25:17.355 [2024-02-14 20:25:54.577275] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.355 [2024-02-14 20:25:54.577281] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.355 [2024-02-14 20:25:54.577283] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.577287] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84e50) on tqpair=0x1e1cb80 00:25:17.355 [2024-02-14 20:25:54.577327] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.577336] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.577342] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.577346] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.577348] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1cb80) 00:25:17.355 [2024-02-14 20:25:54.577354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.355 [2024-02-14 20:25:54.577364] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84e50, cid 4, qid 0 00:25:17.355 [2024-02-14 20:25:54.577503] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.355 [2024-02-14 20:25:54.577513] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.355 [2024-02-14 20:25:54.577516] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.577519] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1cb80): datao=0, datal=4096, cccid=4 00:25:17.355 [2024-02-14 20:25:54.577522] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84e50) on tqpair(0x1e1cb80): expected_datao=0, payload_size=4096 00:25:17.355 [2024-02-14 20:25:54.577619] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.577622] 
nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.617850] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.355 [2024-02-14 20:25:54.617865] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.355 [2024-02-14 20:25:54.617868] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.617872] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84e50) on tqpair=0x1e1cb80 00:25:17.355 [2024-02-14 20:25:54.617883] nvme_ctrlr.c:4544:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:17.355 [2024-02-14 20:25:54.617898] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.617908] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.617914] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.617917] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.617920] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1cb80) 00:25:17.355 [2024-02-14 20:25:54.617927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.355 [2024-02-14 20:25:54.617939] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84e50, cid 4, qid 0 00:25:17.355 [2024-02-14 20:25:54.618077] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.355 [2024-02-14 20:25:54.618086] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.355 [2024-02-14 20:25:54.618089] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.618092] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1cb80): datao=0, datal=4096, cccid=4 00:25:17.355 [2024-02-14 20:25:54.618099] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84e50) on tqpair(0x1e1cb80): expected_datao=0, payload_size=4096 00:25:17.355 [2024-02-14 20:25:54.618300] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.618303] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.658861] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.355 [2024-02-14 20:25:54.658874] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.355 [2024-02-14 20:25:54.658877] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.658881] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84e50) on tqpair=0x1e1cb80 00:25:17.355 [2024-02-14 20:25:54.658897] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.658907] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:17.355 [2024-02-14 20:25:54.658915] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.355 [2024-02-14 
20:25:54.658918] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.658921] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1cb80) 00:25:17.355 [2024-02-14 20:25:54.658927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.355 [2024-02-14 20:25:54.658940] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84e50, cid 4, qid 0 00:25:17.355 [2024-02-14 20:25:54.659077] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.355 [2024-02-14 20:25:54.659086] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.355 [2024-02-14 20:25:54.659089] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.659092] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1cb80): datao=0, datal=4096, cccid=4 00:25:17.355 [2024-02-14 20:25:54.659096] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84e50) on tqpair(0x1e1cb80): expected_datao=0, payload_size=4096 00:25:17.355 [2024-02-14 20:25:54.659299] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.659302] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.355 [2024-02-14 20:25:54.699852] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.355 [2024-02-14 20:25:54.699867] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.356 [2024-02-14 20:25:54.699870] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.699874] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84e50) on tqpair=0x1e1cb80 00:25:17.356 [2024-02-14 20:25:54.699883] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:17.356 [2024-02-14 20:25:54.699892] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:17.356 [2024-02-14 20:25:54.699900] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:17.356 [2024-02-14 20:25:54.699905] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:17.356 [2024-02-14 20:25:54.699910] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:17.356 [2024-02-14 20:25:54.699914] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:17.356 [2024-02-14 20:25:54.699918] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:17.356 [2024-02-14 20:25:54.699925] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:17.356 [2024-02-14 20:25:54.699937] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.699941] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.699944] nvme_tcp.c: 
900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1cb80) 00:25:17.356 [2024-02-14 20:25:54.699951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.356 [2024-02-14 20:25:54.699956] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.699959] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.699962] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e1cb80) 00:25:17.356 [2024-02-14 20:25:54.699967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.356 [2024-02-14 20:25:54.699982] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84e50, cid 4, qid 0 00:25:17.356 [2024-02-14 20:25:54.699986] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84fb0, cid 5, qid 0 00:25:17.356 [2024-02-14 20:25:54.700130] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.356 [2024-02-14 20:25:54.700138] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.356 [2024-02-14 20:25:54.700141] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700145] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84e50) on tqpair=0x1e1cb80 00:25:17.356 [2024-02-14 20:25:54.700151] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.356 [2024-02-14 20:25:54.700156] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.356 [2024-02-14 20:25:54.700158] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700162] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84fb0) on tqpair=0x1e1cb80 00:25:17.356 [2024-02-14 20:25:54.700172] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700175] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700178] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e1cb80) 00:25:17.356 [2024-02-14 20:25:54.700184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.356 [2024-02-14 20:25:54.700196] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84fb0, cid 5, qid 0 00:25:17.356 [2024-02-14 20:25:54.700326] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.356 [2024-02-14 20:25:54.700334] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.356 [2024-02-14 20:25:54.700337] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700340] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84fb0) on tqpair=0x1e1cb80 00:25:17.356 [2024-02-14 20:25:54.700350] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700353] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700356] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e1cb80) 00:25:17.356 [2024-02-14 20:25:54.700362] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.356 [2024-02-14 20:25:54.700373] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84fb0, cid 5, qid 0 00:25:17.356 [2024-02-14 20:25:54.700503] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.356 [2024-02-14 20:25:54.700511] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.356 [2024-02-14 20:25:54.700514] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700519] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84fb0) on tqpair=0x1e1cb80 00:25:17.356 [2024-02-14 20:25:54.700530] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700533] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.700536] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e1cb80) 00:25:17.356 [2024-02-14 20:25:54.700542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.356 [2024-02-14 20:25:54.700554] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84fb0, cid 5, qid 0 00:25:17.356 [2024-02-14 20:25:54.704654] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.356 [2024-02-14 20:25:54.704666] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.356 [2024-02-14 20:25:54.704669] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.704672] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84fb0) on tqpair=0x1e1cb80 00:25:17.356 [2024-02-14 20:25:54.704686] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.704690] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.704693] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e1cb80) 00:25:17.356 [2024-02-14 20:25:54.704699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.356 [2024-02-14 20:25:54.704705] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.704708] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.704711] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1cb80) 00:25:17.356 [2024-02-14 20:25:54.704716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.356 [2024-02-14 20:25:54.704721] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.704725] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.704727] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e1cb80) 00:25:17.356 [2024-02-14 20:25:54.704732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:17.356 [2024-02-14 20:25:54.704738] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.704741] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.704744] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e1cb80) 00:25:17.356 [2024-02-14 20:25:54.704749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.356 [2024-02-14 20:25:54.704763] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84fb0, cid 5, qid 0 00:25:17.356 [2024-02-14 20:25:54.704767] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84e50, cid 4, qid 0 00:25:17.356 [2024-02-14 20:25:54.704771] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e85110, cid 6, qid 0 00:25:17.356 [2024-02-14 20:25:54.704775] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e85270, cid 7, qid 0 00:25:17.356 [2024-02-14 20:25:54.705129] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.356 [2024-02-14 20:25:54.705139] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.356 [2024-02-14 20:25:54.705142] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.705145] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1cb80): datao=0, datal=8192, cccid=5 00:25:17.356 [2024-02-14 20:25:54.705152] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84fb0) on tqpair(0x1e1cb80): expected_datao=0, payload_size=8192 00:25:17.356 [2024-02-14 20:25:54.705158] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.705161] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.356 [2024-02-14 20:25:54.705166] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.357 [2024-02-14 20:25:54.705171] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.357 [2024-02-14 20:25:54.705174] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.705176] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1cb80): datao=0, datal=512, cccid=4 00:25:17.357 [2024-02-14 20:25:54.705180] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84e50) on tqpair(0x1e1cb80): expected_datao=0, payload_size=512 00:25:17.357 [2024-02-14 20:25:54.705186] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.705189] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.705193] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.357 [2024-02-14 20:25:54.705198] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.357 [2024-02-14 20:25:54.705201] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.705203] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1cb80): datao=0, datal=512, cccid=6 00:25:17.357 [2024-02-14 20:25:54.705207] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e85110) on tqpair(0x1e1cb80): expected_datao=0, payload_size=512 00:25:17.357 [2024-02-14 20:25:54.705213] 
nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.705216] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.705220] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:17.357 [2024-02-14 20:25:54.705225] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:17.357 [2024-02-14 20:25:54.705228] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.705230] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1cb80): datao=0, datal=4096, cccid=7 00:25:17.357 [2024-02-14 20:25:54.705234] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e85270) on tqpair(0x1e1cb80): expected_datao=0, payload_size=4096 00:25:17.357 [2024-02-14 20:25:54.705359] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.705363] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.745836] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.357 [2024-02-14 20:25:54.745849] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.357 [2024-02-14 20:25:54.745853] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.745856] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84fb0) on tqpair=0x1e1cb80 00:25:17.357 [2024-02-14 20:25:54.745869] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.357 [2024-02-14 20:25:54.745874] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.357 [2024-02-14 20:25:54.745877] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.745880] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84e50) on tqpair=0x1e1cb80 00:25:17.357 [2024-02-14 20:25:54.745888] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.357 [2024-02-14 20:25:54.745893] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.357 [2024-02-14 20:25:54.745896] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.745899] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e85110) on tqpair=0x1e1cb80 00:25:17.357 [2024-02-14 20:25:54.745905] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.357 [2024-02-14 20:25:54.745910] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.357 [2024-02-14 20:25:54.745915] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.357 [2024-02-14 20:25:54.745918] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e85270) on tqpair=0x1e1cb80 00:25:17.357 ===================================================== 00:25:17.357 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.357 ===================================================== 00:25:17.357 Controller Capabilities/Features 00:25:17.357 ================================ 00:25:17.357 Vendor ID: 8086 00:25:17.357 Subsystem Vendor ID: 8086 00:25:17.357 Serial Number: SPDK00000000000001 00:25:17.357 Model Number: SPDK bdev Controller 00:25:17.357 Firmware Version: 24.05 00:25:17.357 Recommended Arb Burst: 6 00:25:17.357 IEEE OUI Identifier: e4 d2 5c 00:25:17.357 Multi-path I/O 00:25:17.357 May have multiple subsystem 
ports: Yes 00:25:17.357 May have multiple controllers: Yes 00:25:17.357 Associated with SR-IOV VF: No 00:25:17.357 Max Data Transfer Size: 131072 00:25:17.357 Max Number of Namespaces: 32 00:25:17.357 Max Number of I/O Queues: 127 00:25:17.357 NVMe Specification Version (VS): 1.3 00:25:17.357 NVMe Specification Version (Identify): 1.3 00:25:17.357 Maximum Queue Entries: 128 00:25:17.357 Contiguous Queues Required: Yes 00:25:17.357 Arbitration Mechanisms Supported 00:25:17.357 Weighted Round Robin: Not Supported 00:25:17.357 Vendor Specific: Not Supported 00:25:17.357 Reset Timeout: 15000 ms 00:25:17.357 Doorbell Stride: 4 bytes 00:25:17.357 NVM Subsystem Reset: Not Supported 00:25:17.357 Command Sets Supported 00:25:17.357 NVM Command Set: Supported 00:25:17.357 Boot Partition: Not Supported 00:25:17.357 Memory Page Size Minimum: 4096 bytes 00:25:17.357 Memory Page Size Maximum: 4096 bytes 00:25:17.357 Persistent Memory Region: Not Supported 00:25:17.357 Optional Asynchronous Events Supported 00:25:17.357 Namespace Attribute Notices: Supported 00:25:17.357 Firmware Activation Notices: Not Supported 00:25:17.357 ANA Change Notices: Not Supported 00:25:17.357 PLE Aggregate Log Change Notices: Not Supported 00:25:17.357 LBA Status Info Alert Notices: Not Supported 00:25:17.357 EGE Aggregate Log Change Notices: Not Supported 00:25:17.357 Normal NVM Subsystem Shutdown event: Not Supported 00:25:17.357 Zone Descriptor Change Notices: Not Supported 00:25:17.357 Discovery Log Change Notices: Not Supported 00:25:17.357 Controller Attributes 00:25:17.357 128-bit Host Identifier: Supported 00:25:17.357 Non-Operational Permissive Mode: Not Supported 00:25:17.357 NVM Sets: Not Supported 00:25:17.357 Read Recovery Levels: Not Supported 00:25:17.357 Endurance Groups: Not Supported 00:25:17.357 Predictable Latency Mode: Not Supported 00:25:17.357 Traffic Based Keep ALive: Not Supported 00:25:17.357 Namespace Granularity: Not Supported 00:25:17.357 SQ Associations: Not Supported 00:25:17.357 UUID List: Not Supported 00:25:17.357 Multi-Domain Subsystem: Not Supported 00:25:17.357 Fixed Capacity Management: Not Supported 00:25:17.357 Variable Capacity Management: Not Supported 00:25:17.357 Delete Endurance Group: Not Supported 00:25:17.357 Delete NVM Set: Not Supported 00:25:17.357 Extended LBA Formats Supported: Not Supported 00:25:17.357 Flexible Data Placement Supported: Not Supported 00:25:17.357 00:25:17.357 Controller Memory Buffer Support 00:25:17.357 ================================ 00:25:17.357 Supported: No 00:25:17.357 00:25:17.357 Persistent Memory Region Support 00:25:17.357 ================================ 00:25:17.357 Supported: No 00:25:17.357 00:25:17.357 Admin Command Set Attributes 00:25:17.357 ============================ 00:25:17.357 Security Send/Receive: Not Supported 00:25:17.357 Format NVM: Not Supported 00:25:17.357 Firmware Activate/Download: Not Supported 00:25:17.357 Namespace Management: Not Supported 00:25:17.357 Device Self-Test: Not Supported 00:25:17.357 Directives: Not Supported 00:25:17.357 NVMe-MI: Not Supported 00:25:17.357 Virtualization Management: Not Supported 00:25:17.357 Doorbell Buffer Config: Not Supported 00:25:17.357 Get LBA Status Capability: Not Supported 00:25:17.357 Command & Feature Lockdown Capability: Not Supported 00:25:17.357 Abort Command Limit: 4 00:25:17.357 Async Event Request Limit: 4 00:25:17.357 Number of Firmware Slots: N/A 00:25:17.357 Firmware Slot 1 Read-Only: N/A 00:25:17.357 Firmware Activation Without Reset: N/A 00:25:17.357 Multiple 
Update Detection Support: N/A 00:25:17.357 Firmware Update Granularity: No Information Provided 00:25:17.357 Per-Namespace SMART Log: No 00:25:17.357 Asymmetric Namespace Access Log Page: Not Supported 00:25:17.357 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:17.357 Command Effects Log Page: Supported 00:25:17.357 Get Log Page Extended Data: Supported 00:25:17.357 Telemetry Log Pages: Not Supported 00:25:17.357 Persistent Event Log Pages: Not Supported 00:25:17.357 Supported Log Pages Log Page: May Support 00:25:17.357 Commands Supported & Effects Log Page: Not Supported 00:25:17.357 Feature Identifiers & Effects Log Page:May Support 00:25:17.357 NVMe-MI Commands & Effects Log Page: May Support 00:25:17.357 Data Area 4 for Telemetry Log: Not Supported 00:25:17.357 Error Log Page Entries Supported: 128 00:25:17.357 Keep Alive: Supported 00:25:17.357 Keep Alive Granularity: 10000 ms 00:25:17.357 00:25:17.357 NVM Command Set Attributes 00:25:17.358 ========================== 00:25:17.358 Submission Queue Entry Size 00:25:17.358 Max: 64 00:25:17.358 Min: 64 00:25:17.358 Completion Queue Entry Size 00:25:17.358 Max: 16 00:25:17.358 Min: 16 00:25:17.358 Number of Namespaces: 32 00:25:17.358 Compare Command: Supported 00:25:17.358 Write Uncorrectable Command: Not Supported 00:25:17.358 Dataset Management Command: Supported 00:25:17.358 Write Zeroes Command: Supported 00:25:17.358 Set Features Save Field: Not Supported 00:25:17.358 Reservations: Supported 00:25:17.358 Timestamp: Not Supported 00:25:17.358 Copy: Supported 00:25:17.358 Volatile Write Cache: Present 00:25:17.358 Atomic Write Unit (Normal): 1 00:25:17.358 Atomic Write Unit (PFail): 1 00:25:17.358 Atomic Compare & Write Unit: 1 00:25:17.358 Fused Compare & Write: Supported 00:25:17.358 Scatter-Gather List 00:25:17.358 SGL Command Set: Supported 00:25:17.358 SGL Keyed: Supported 00:25:17.358 SGL Bit Bucket Descriptor: Not Supported 00:25:17.358 SGL Metadata Pointer: Not Supported 00:25:17.358 Oversized SGL: Not Supported 00:25:17.358 SGL Metadata Address: Not Supported 00:25:17.358 SGL Offset: Supported 00:25:17.358 Transport SGL Data Block: Not Supported 00:25:17.358 Replay Protected Memory Block: Not Supported 00:25:17.358 00:25:17.358 Firmware Slot Information 00:25:17.358 ========================= 00:25:17.358 Active slot: 1 00:25:17.358 Slot 1 Firmware Revision: 24.05 00:25:17.358 00:25:17.358 00:25:17.358 Commands Supported and Effects 00:25:17.358 ============================== 00:25:17.358 Admin Commands 00:25:17.358 -------------- 00:25:17.358 Get Log Page (02h): Supported 00:25:17.358 Identify (06h): Supported 00:25:17.358 Abort (08h): Supported 00:25:17.358 Set Features (09h): Supported 00:25:17.358 Get Features (0Ah): Supported 00:25:17.358 Asynchronous Event Request (0Ch): Supported 00:25:17.358 Keep Alive (18h): Supported 00:25:17.358 I/O Commands 00:25:17.358 ------------ 00:25:17.358 Flush (00h): Supported LBA-Change 00:25:17.358 Write (01h): Supported LBA-Change 00:25:17.358 Read (02h): Supported 00:25:17.358 Compare (05h): Supported 00:25:17.358 Write Zeroes (08h): Supported LBA-Change 00:25:17.358 Dataset Management (09h): Supported LBA-Change 00:25:17.358 Copy (19h): Supported LBA-Change 00:25:17.358 Unknown (79h): Supported LBA-Change 00:25:17.358 Unknown (7Ah): Supported 00:25:17.358 00:25:17.358 Error Log 00:25:17.358 ========= 00:25:17.358 00:25:17.358 Arbitration 00:25:17.358 =========== 00:25:17.358 Arbitration Burst: 1 00:25:17.358 00:25:17.358 Power Management 00:25:17.358 ================ 00:25:17.358 
Number of Power States: 1 00:25:17.358 Current Power State: Power State #0 00:25:17.358 Power State #0: 00:25:17.358 Max Power: 0.00 W 00:25:17.358 Non-Operational State: Operational 00:25:17.358 Entry Latency: Not Reported 00:25:17.358 Exit Latency: Not Reported 00:25:17.358 Relative Read Throughput: 0 00:25:17.358 Relative Read Latency: 0 00:25:17.358 Relative Write Throughput: 0 00:25:17.358 Relative Write Latency: 0 00:25:17.358 Idle Power: Not Reported 00:25:17.358 Active Power: Not Reported 00:25:17.358 Non-Operational Permissive Mode: Not Supported 00:25:17.358 00:25:17.358 Health Information 00:25:17.358 ================== 00:25:17.358 Critical Warnings: 00:25:17.358 Available Spare Space: OK 00:25:17.358 Temperature: OK 00:25:17.358 Device Reliability: OK 00:25:17.358 Read Only: No 00:25:17.358 Volatile Memory Backup: OK 00:25:17.358 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:17.358 Temperature Threshold: [2024-02-14 20:25:54.746004] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.358 [2024-02-14 20:25:54.746009] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.358 [2024-02-14 20:25:54.746012] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e1cb80) 00:25:17.358 [2024-02-14 20:25:54.746018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.358 [2024-02-14 20:25:54.746031] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e85270, cid 7, qid 0 00:25:17.358 [2024-02-14 20:25:54.746154] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.358 [2024-02-14 20:25:54.746162] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.358 [2024-02-14 20:25:54.746166] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.358 [2024-02-14 20:25:54.746169] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e85270) on tqpair=0x1e1cb80 00:25:17.358 [2024-02-14 20:25:54.746196] nvme_ctrlr.c:4208:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:17.358 [2024-02-14 20:25:54.746207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.358 [2024-02-14 20:25:54.746213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.359 [2024-02-14 20:25:54.746218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.359 [2024-02-14 20:25:54.746223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.359 [2024-02-14 20:25:54.746230] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.746233] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.746236] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1cb80) 00:25:17.359 [2024-02-14 20:25:54.746242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.359 [2024-02-14 20:25:54.746255] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84cf0, cid 3, qid 0 
00:25:17.359 [2024-02-14 20:25:54.746382] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.359 [2024-02-14 20:25:54.746390] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.359 [2024-02-14 20:25:54.746393] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.746396] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84cf0) on tqpair=0x1e1cb80 00:25:17.359 [2024-02-14 20:25:54.746403] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.746406] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.746409] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1cb80) 00:25:17.359 [2024-02-14 20:25:54.746415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.359 [2024-02-14 20:25:54.746431] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84cf0, cid 3, qid 0 00:25:17.359 [2024-02-14 20:25:54.746564] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.359 [2024-02-14 20:25:54.746572] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.359 [2024-02-14 20:25:54.746575] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.746578] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84cf0) on tqpair=0x1e1cb80 00:25:17.359 [2024-02-14 20:25:54.746583] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:17.359 [2024-02-14 20:25:54.746590] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:17.359 [2024-02-14 20:25:54.746600] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.746603] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.746606] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1cb80) 00:25:17.359 [2024-02-14 20:25:54.746612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.359 [2024-02-14 20:25:54.746624] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84cf0, cid 3, qid 0 00:25:17.359 [2024-02-14 20:25:54.750655] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.359 [2024-02-14 20:25:54.750666] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.359 [2024-02-14 20:25:54.750669] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.750673] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84cf0) on tqpair=0x1e1cb80 00:25:17.359 [2024-02-14 20:25:54.750685] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.750688] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.750691] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1cb80) 00:25:17.359 [2024-02-14 20:25:54.750697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.359 [2024-02-14 
20:25:54.750710] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84cf0, cid 3, qid 0 00:25:17.359 [2024-02-14 20:25:54.750920] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:17.359 [2024-02-14 20:25:54.750929] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:17.359 [2024-02-14 20:25:54.750932] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:17.359 [2024-02-14 20:25:54.750935] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e84cf0) on tqpair=0x1e1cb80 00:25:17.359 [2024-02-14 20:25:54.750943] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:25:17.359 0 Kelvin (-273 Celsius) 00:25:17.359 Available Spare: 0% 00:25:17.359 Available Spare Threshold: 0% 00:25:17.359 Life Percentage Used: 0% 00:25:17.359 Data Units Read: 0 00:25:17.359 Data Units Written: 0 00:25:17.359 Host Read Commands: 0 00:25:17.359 Host Write Commands: 0 00:25:17.359 Controller Busy Time: 0 minutes 00:25:17.359 Power Cycles: 0 00:25:17.359 Power On Hours: 0 hours 00:25:17.359 Unsafe Shutdowns: 0 00:25:17.359 Unrecoverable Media Errors: 0 00:25:17.359 Lifetime Error Log Entries: 0 00:25:17.359 Warning Temperature Time: 0 minutes 00:25:17.359 Critical Temperature Time: 0 minutes 00:25:17.359 00:25:17.359 Number of Queues 00:25:17.359 ================ 00:25:17.359 Number of I/O Submission Queues: 127 00:25:17.359 Number of I/O Completion Queues: 127 00:25:17.359 00:25:17.359 Active Namespaces 00:25:17.359 ================= 00:25:17.359 Namespace ID:1 00:25:17.359 Error Recovery Timeout: Unlimited 00:25:17.359 Command Set Identifier: NVM (00h) 00:25:17.359 Deallocate: Supported 00:25:17.359 Deallocated/Unwritten Error: Not Supported 00:25:17.359 Deallocated Read Value: Unknown 00:25:17.359 Deallocate in Write Zeroes: Not Supported 00:25:17.359 Deallocated Guard Field: 0xFFFF 00:25:17.359 Flush: Supported 00:25:17.359 Reservation: Supported 00:25:17.359 Namespace Sharing Capabilities: Multiple Controllers 00:25:17.359 Size (in LBAs): 131072 (0GiB) 00:25:17.359 Capacity (in LBAs): 131072 (0GiB) 00:25:17.359 Utilization (in LBAs): 131072 (0GiB) 00:25:17.359 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:17.359 EUI64: ABCDEF0123456789 00:25:17.359 UUID: 96819dac-f707-47ff-be37-69301ca7c0f4 00:25:17.359 Thin Provisioning: Not Supported 00:25:17.359 Per-NS Atomic Units: Yes 00:25:17.359 Atomic Boundary Size (Normal): 0 00:25:17.359 Atomic Boundary Size (PFail): 0 00:25:17.359 Atomic Boundary Offset: 0 00:25:17.359 Maximum Single Source Range Length: 65535 00:25:17.359 Maximum Copy Length: 65535 00:25:17.359 Maximum Source Range Count: 1 00:25:17.359 NGUID/EUI64 Never Reused: No 00:25:17.359 Namespace Write Protected: No 00:25:17.359 Number of LBA Formats: 1 00:25:17.359 Current LBA Format: LBA Format #00 00:25:17.359 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:17.359 00:25:17.359 20:25:54 -- host/identify.sh@51 -- # sync 00:25:17.359 20:25:54 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.359 20:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.359 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.620 20:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.620 20:25:54 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:17.620 20:25:54 -- host/identify.sh@56 -- # nvmftestfini 00:25:17.620 20:25:54 -- nvmf/common.sh@476 -- # nvmfcleanup 
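
The controller and namespace report interleaved with the *DEBUG* records above (Firmware Version 24.05, Max Data Transfer Size 131072, NGUID/EUI64, a single 512-byte LBA format) is the target's Identify Controller / Identify Namespace data, read back over NVMe/TCP by the nvmf_identify test. A minimal sketch of pulling the same data from a stock Linux initiator with nvme-cli, assuming the target from this run is still listening on 10.0.0.2:4420 and the controller enumerates as /dev/nvme0 (the tool and the device name are illustrative, not part of this test):

    nvme discover -t tcp -a 10.0.0.2 -s 4420    # discovery log: should list nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0                     # Identify Controller: model, firmware 24.05, MDTS, IEEE OUI
    nvme id-ns /dev/nvme0n1                     # Identify Namespace: NGUID, EUI64, LBA format list
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
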
00:25:17.620 20:25:54 -- nvmf/common.sh@116 -- # sync 00:25:17.620 20:25:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:17.620 20:25:54 -- nvmf/common.sh@119 -- # set +e 00:25:17.620 20:25:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:17.620 20:25:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:17.620 rmmod nvme_tcp 00:25:17.620 rmmod nvme_fabrics 00:25:17.620 rmmod nvme_keyring 00:25:17.620 20:25:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:17.620 20:25:54 -- nvmf/common.sh@123 -- # set -e 00:25:17.620 20:25:54 -- nvmf/common.sh@124 -- # return 0 00:25:17.620 20:25:54 -- nvmf/common.sh@477 -- # '[' -n 1897782 ']' 00:25:17.620 20:25:54 -- nvmf/common.sh@478 -- # killprocess 1897782 00:25:17.620 20:25:54 -- common/autotest_common.sh@924 -- # '[' -z 1897782 ']' 00:25:17.620 20:25:54 -- common/autotest_common.sh@928 -- # kill -0 1897782 00:25:17.620 20:25:54 -- common/autotest_common.sh@929 -- # uname 00:25:17.620 20:25:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:17.620 20:25:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1897782 00:25:17.620 20:25:54 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:17.620 20:25:54 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:17.620 20:25:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1897782' 00:25:17.620 killing process with pid 1897782 00:25:17.620 20:25:54 -- common/autotest_common.sh@943 -- # kill 1897782 00:25:17.620 [2024-02-14 20:25:54.874742] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:17.620 20:25:54 -- common/autotest_common.sh@948 -- # wait 1897782 00:25:17.880 20:25:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:17.880 20:25:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:17.880 20:25:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:17.880 20:25:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.880 20:25:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:17.880 20:25:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.880 20:25:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.880 20:25:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.788 20:25:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:19.788 00:25:19.788 real 0m10.203s 00:25:19.788 user 0m7.883s 00:25:19.788 sys 0m5.175s 00:25:19.788 20:25:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:19.788 20:25:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.788 ************************************ 00:25:19.788 END TEST nvmf_identify 00:25:19.788 ************************************ 00:25:19.788 20:25:57 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:19.788 20:25:57 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:25:19.788 20:25:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:19.788 20:25:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.788 ************************************ 00:25:19.788 START TEST nvmf_perf 00:25:19.788 ************************************ 00:25:19.788 20:25:57 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:20.047 * 
Looking for test storage... 00:25:20.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.047 20:25:57 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.047 20:25:57 -- nvmf/common.sh@7 -- # uname -s 00:25:20.047 20:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.047 20:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.047 20:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.047 20:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.047 20:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.047 20:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.047 20:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.047 20:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.047 20:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.047 20:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.047 20:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:20.047 20:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:20.047 20:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.047 20:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.047 20:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.047 20:25:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.047 20:25:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.047 20:25:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.047 20:25:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.047 20:25:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.047 20:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.047 20:25:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.047 20:25:57 -- paths/export.sh@5 -- # export PATH 00:25:20.047 20:25:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.047 20:25:57 -- nvmf/common.sh@46 -- # : 0 00:25:20.047 20:25:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:20.047 20:25:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:20.047 20:25:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:20.047 20:25:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.047 20:25:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.047 20:25:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:20.047 20:25:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:20.047 20:25:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:20.047 20:25:57 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:20.047 20:25:57 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:20.047 20:25:57 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:20.047 20:25:57 -- host/perf.sh@17 -- # nvmftestinit 00:25:20.047 20:25:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:20.047 20:25:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.047 20:25:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:20.047 20:25:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:20.047 20:25:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:20.047 20:25:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.047 20:25:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.047 20:25:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.047 20:25:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:20.047 20:25:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:20.047 20:25:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:20.047 20:25:57 -- common/autotest_common.sh@10 -- # set +x 00:25:26.620 20:26:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:26.620 20:26:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:26.620 20:26:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:26.620 20:26:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:26.620 20:26:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:26.620 20:26:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:26.620 20:26:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:26.620 20:26:03 -- nvmf/common.sh@294 -- # net_devs=() 
00:25:26.620 20:26:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:26.620 20:26:03 -- nvmf/common.sh@295 -- # e810=() 00:25:26.620 20:26:03 -- nvmf/common.sh@295 -- # local -ga e810 00:25:26.620 20:26:03 -- nvmf/common.sh@296 -- # x722=() 00:25:26.620 20:26:03 -- nvmf/common.sh@296 -- # local -ga x722 00:25:26.620 20:26:03 -- nvmf/common.sh@297 -- # mlx=() 00:25:26.620 20:26:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:26.620 20:26:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.620 20:26:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:26.620 20:26:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:26.620 20:26:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:26.620 20:26:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:26.620 20:26:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:26.620 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:26.620 20:26:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:26.620 20:26:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:26.620 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:26.620 20:26:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:26.620 20:26:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:26.620 20:26:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.620 20:26:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:26.620 20:26:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:25:26.620 20:26:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:26.620 Found net devices under 0000:af:00.0: cvl_0_0 00:25:26.620 20:26:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.620 20:26:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:26.620 20:26:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.620 20:26:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:26.620 20:26:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.620 20:26:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:26.620 Found net devices under 0000:af:00.1: cvl_0_1 00:25:26.620 20:26:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.620 20:26:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:26.620 20:26:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:26.620 20:26:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:26.620 20:26:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:26.620 20:26:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.620 20:26:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.620 20:26:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.620 20:26:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:26.620 20:26:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.620 20:26:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.620 20:26:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:26.620 20:26:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.620 20:26:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.620 20:26:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:26.620 20:26:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:26.620 20:26:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.620 20:26:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.621 20:26:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.621 20:26:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.621 20:26:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:26.621 20:26:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.621 20:26:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.621 20:26:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.621 20:26:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:26.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:25:26.621 00:25:26.621 --- 10.0.0.2 ping statistics --- 00:25:26.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.621 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:25:26.621 20:26:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:25:26.621 00:25:26.621 --- 10.0.0.1 ping statistics --- 00:25:26.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.621 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:25:26.621 20:26:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.621 20:26:03 -- nvmf/common.sh@410 -- # return 0 00:25:26.621 20:26:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:26.621 20:26:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.621 20:26:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:26.621 20:26:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:26.621 20:26:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.621 20:26:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:26.621 20:26:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:26.621 20:26:03 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:26.621 20:26:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:26.621 20:26:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:26.621 20:26:03 -- common/autotest_common.sh@10 -- # set +x 00:25:26.621 20:26:03 -- nvmf/common.sh@469 -- # nvmfpid=1901827 00:25:26.621 20:26:03 -- nvmf/common.sh@470 -- # waitforlisten 1901827 00:25:26.621 20:26:03 -- common/autotest_common.sh@817 -- # '[' -z 1901827 ']' 00:25:26.621 20:26:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.621 20:26:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:26.621 20:26:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:26.621 20:26:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.621 20:26:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:26.621 20:26:03 -- common/autotest_common.sh@10 -- # set +x 00:25:26.621 [2024-02-14 20:26:03.482761] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:25:26.621 [2024-02-14 20:26:03.482805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.621 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.621 [2024-02-14 20:26:03.546582] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:26.621 [2024-02-14 20:26:03.622548] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:26.621 [2024-02-14 20:26:03.622672] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.621 [2024-02-14 20:26:03.622681] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.621 [2024-02-14 20:26:03.622687] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
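
The DPDK/EAL initialization and trace-setup notices above come from nvmf_tgt starting inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init built a few lines earlier. Condensed into plain commands, the topology from this run is sketched below: the target-side port cvl_0_0 moves into its own namespace with 10.0.0.2, the initiator-side port cvl_0_1 keeps 10.0.0.1 in the root namespace, and the target binary launches inside the namespace (the nvmf_tgt path is shortened here; the log uses the full workspace path):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The two pings above (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify this back-to-back link in both directions before the target comes up.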
00:25:26.621 [2024-02-14 20:26:03.622732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.621 [2024-02-14 20:26:03.622750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.621 [2024-02-14 20:26:03.622855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:26.621 [2024-02-14 20:26:03.622856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.879 20:26:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.880 20:26:04 -- common/autotest_common.sh@850 -- # return 0 00:25:26.880 20:26:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:26.880 20:26:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:26.880 20:26:04 -- common/autotest_common.sh@10 -- # set +x 00:25:27.139 20:26:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.139 20:26:04 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:27.139 20:26:04 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:30.428 20:26:07 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:30.428 20:26:07 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:30.428 20:26:07 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:25:30.428 20:26:07 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:30.428 20:26:07 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:30.428 20:26:07 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:25:30.428 20:26:07 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:30.428 20:26:07 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:30.428 20:26:07 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:30.687 [2024-02-14 20:26:07.852600] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.687 20:26:07 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:30.687 20:26:08 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:30.687 20:26:08 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.947 20:26:08 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:30.947 20:26:08 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:31.205 20:26:08 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.205 [2024-02-14 20:26:08.573614] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.205 20:26:08 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:31.464 20:26:08 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:25:31.464 20:26:08 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:31.464 20:26:08 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
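
Before the first benchmark runs, the xtrace above shows perf.sh assembling the target over JSON-RPC: create the TCP transport, create a 64 MiB Malloc bdev, create subsystem cnode1, attach both the Malloc bdev and the local NVMe bdev (Nvme0n1, attached earlier from 0000:5e:00.0 via gen_nvme.sh) as namespaces, then open listeners for the subsystem and for discovery. The same sequence condensed, with the arguments exactly as traced in this run (rpc.py talks to the target over its default /var/tmp/spdk.sock socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_malloc_create 64 512              # 64 MiB, 512-byte blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocation that follows runs against the local PCIe controller first, as a baseline before the fabric runs.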
00:25:31.464 20:26:08 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:32.843 Initializing NVMe Controllers 00:25:32.843 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:25:32.843 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:25:32.843 Initialization complete. Launching workers. 00:25:32.843 ======================================================== 00:25:32.843 Latency(us) 00:25:32.843 Device Information : IOPS MiB/s Average min max 00:25:32.843 PCIE (0000:5e:00.0) NSID 1 from core 0: 102104.86 398.85 312.87 10.34 4408.32 00:25:32.843 ======================================================== 00:25:32.843 Total : 102104.86 398.85 312.87 10.34 4408.32 00:25:32.843 00:25:32.843 20:26:10 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.843 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.264 Initializing NVMe Controllers 00:25:34.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:34.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:34.264 Initialization complete. Launching workers. 00:25:34.264 ======================================================== 00:25:34.264 Latency(us) 00:25:34.264 Device Information : IOPS MiB/s Average min max 00:25:34.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 56.00 0.22 17963.12 463.00 45498.54 00:25:34.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16467.30 6983.13 47899.60 00:25:34.264 ======================================================== 00:25:34.264 Total : 117.00 0.46 17183.25 463.00 47899.60 00:25:34.264 00:25:34.264 20:26:11 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:34.264 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.664 Initializing NVMe Controllers 00:25:35.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:35.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:35.664 Initialization complete. Launching workers. 
00:25:35.664 ========================================================
00:25:35.664 Latency(us)
00:25:35.664 Device Information : IOPS MiB/s Average min max
00:25:35.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8081.00 31.57 3973.46 836.11 8668.45
00:25:35.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3901.00 15.24 8240.74 6466.50 15805.73
00:25:35.664 ========================================================
00:25:35.664 Total : 11982.00 46.80 5362.77 836.11 15805.73
00:25:35.664
00:25:35.664 20:26:12 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:25:35.664 20:26:12 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:25:35.664 20:26:12 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:35.664 EAL: No free 2048 kB hugepages reported on node 1
00:25:38.202 Initializing NVMe Controllers
00:25:38.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:38.202 Controller IO queue size 128, less than required.
00:25:38.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:38.202 Controller IO queue size 128, less than required.
00:25:38.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:38.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:38.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:38.202 Initialization complete. Launching workers.
00:25:38.202 ========================================================
00:25:38.202 Latency(us)
00:25:38.202 Device Information : IOPS MiB/s Average min max
00:25:38.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 879.46 219.87 149636.06 83204.06 221673.63
00:25:38.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.29 151.07 216431.84 70258.96 350222.93
00:25:38.202 ========================================================
00:25:38.202 Total : 1483.75 370.94 176840.00 70258.96 350222.93
00:25:38.202
00:25:38.202 20:26:15 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:25:38.202 EAL: No free 2048 kB hugepages reported on node 1
00:25:38.202 No valid NVMe controllers or AIO or URING devices found
00:25:38.202 Initializing NVMe Controllers
00:25:38.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:38.202 Controller IO queue size 128, less than required.
00:25:38.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:38.202 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:25:38.202 Controller IO queue size 128, less than required.
00:25:38.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:38.202 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:25:38.202 WARNING: Some requested NVMe devices were skipped
00:25:38.202 20:26:15 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:25:38.202 EAL: No free 2048 kB hugepages reported on node 1
00:25:40.742 Initializing NVMe Controllers
00:25:40.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:40.742 Controller IO queue size 128, less than required.
00:25:40.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:40.742 Controller IO queue size 128, less than required.
00:25:40.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:40.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:40.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:40.742 Initialization complete. Launching workers.
00:25:40.742
00:25:40.742 ====================
00:25:40.742 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:40.742 TCP transport:
00:25:40.742 polls: 57637
00:25:40.742 idle_polls: 16776
00:25:40.742 sock_completions: 40861
00:25:40.742 nvme_completions: 3719
00:25:40.742 submitted_requests: 5556
00:25:40.742 queued_requests: 1
00:25:40.742
00:25:40.742 ====================
00:25:40.742 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:40.742 TCP transport:
00:25:40.742 polls: 57960
00:25:40.742 idle_polls: 15883
00:25:40.742 sock_completions: 42077
00:25:40.742 nvme_completions: 3753
00:25:40.742 submitted_requests: 5612
00:25:40.742 queued_requests: 1
00:25:40.742 ========================================================
00:25:40.742 Latency(us)
00:25:40.742 Device Information : IOPS MiB/s Average min max
00:25:40.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 928.04 232.01 142523.63 73621.99 243348.69
00:25:40.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 936.53 234.13 139431.40 76241.42 217272.22
00:25:40.742 ========================================================
00:25:40.742 Total : 1864.58 466.14 140970.47 73621.99 243348.69
00:25:40.742
00:25:40.742 20:26:18 -- host/perf.sh@66 -- # sync
00:25:40.742 20:26:18 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:41.001 20:26:18 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:25:41.001 20:26:18 -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']'
00:25:41.001 20:26:18 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:25:44.293 20:26:21 -- host/perf.sh@72 -- # ls_guid=4468bf72-d41c-4901-a78b-97dd25bca59e
00:25:44.293 20:26:21 -- host/perf.sh@73 -- # get_lvs_free_mb 4468bf72-d41c-4901-a78b-97dd25bca59e
00:25:44.293 20:26:21 -- common/autotest_common.sh@1341 -- # local lvs_uuid=4468bf72-d41c-4901-a78b-97dd25bca59e
00:25:44.293 20:26:21 -- common/autotest_common.sh@1342 -- # local lvs_info
00:25:44.293 20:26:21 -- common/autotest_common.sh@1343 -- # local fc
00:25:44.293 20:26:21 -- common/autotest_common.sh@1344 -- # local cs
00:25:44.293 20:26:21 -- common/autotest_common.sh@1345 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:44.293 20:26:21 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:25:44.293 { 00:25:44.293 "uuid": "4468bf72-d41c-4901-a78b-97dd25bca59e", 00:25:44.293 "name": "lvs_0", 00:25:44.293 "base_bdev": "Nvme0n1", 00:25:44.293 "total_data_clusters": 238234, 00:25:44.293 "free_clusters": 238234, 00:25:44.293 "block_size": 512, 00:25:44.293 "cluster_size": 4194304 00:25:44.293 } 00:25:44.293 ]' 00:25:44.293 20:26:21 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="4468bf72-d41c-4901-a78b-97dd25bca59e") .free_clusters' 00:25:44.293 20:26:21 -- common/autotest_common.sh@1346 -- # fc=238234 00:25:44.293 20:26:21 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="4468bf72-d41c-4901-a78b-97dd25bca59e") .cluster_size' 00:25:44.293 20:26:21 -- common/autotest_common.sh@1347 -- # cs=4194304 00:25:44.293 20:26:21 -- common/autotest_common.sh@1350 -- # free_mb=952936 00:25:44.293 20:26:21 -- common/autotest_common.sh@1351 -- # echo 952936 00:25:44.293 952936 00:25:44.293 20:26:21 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:25:44.293 20:26:21 -- host/perf.sh@78 -- # free_mb=20480 00:25:44.293 20:26:21 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4468bf72-d41c-4901-a78b-97dd25bca59e lbd_0 20480 00:25:44.863 20:26:22 -- host/perf.sh@80 -- # lb_guid=fc816aa8-7fbb-47c6-9b6f-cbf3209e9f97 00:25:44.863 20:26:22 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore fc816aa8-7fbb-47c6-9b6f-cbf3209e9f97 lvs_n_0 00:25:45.431 20:26:22 -- host/perf.sh@83 -- # ls_nested_guid=057ada0f-319c-4f09-b7e0-71204695c072 00:25:45.431 20:26:22 -- host/perf.sh@84 -- # get_lvs_free_mb 057ada0f-319c-4f09-b7e0-71204695c072 00:25:45.431 20:26:22 -- common/autotest_common.sh@1341 -- # local lvs_uuid=057ada0f-319c-4f09-b7e0-71204695c072 00:25:45.431 20:26:22 -- common/autotest_common.sh@1342 -- # local lvs_info 00:25:45.431 20:26:22 -- common/autotest_common.sh@1343 -- # local fc 00:25:45.431 20:26:22 -- common/autotest_common.sh@1344 -- # local cs 00:25:45.431 20:26:22 -- common/autotest_common.sh@1345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:45.691 20:26:23 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:25:45.691 { 00:25:45.691 "uuid": "4468bf72-d41c-4901-a78b-97dd25bca59e", 00:25:45.691 "name": "lvs_0", 00:25:45.691 "base_bdev": "Nvme0n1", 00:25:45.691 "total_data_clusters": 238234, 00:25:45.691 "free_clusters": 233114, 00:25:45.691 "block_size": 512, 00:25:45.691 "cluster_size": 4194304 00:25:45.691 }, 00:25:45.691 { 00:25:45.691 "uuid": "057ada0f-319c-4f09-b7e0-71204695c072", 00:25:45.691 "name": "lvs_n_0", 00:25:45.691 "base_bdev": "fc816aa8-7fbb-47c6-9b6f-cbf3209e9f97", 00:25:45.691 "total_data_clusters": 5114, 00:25:45.691 "free_clusters": 5114, 00:25:45.691 "block_size": 512, 00:25:45.691 "cluster_size": 4194304 00:25:45.691 } 00:25:45.691 ]' 00:25:45.691 20:26:23 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="057ada0f-319c-4f09-b7e0-71204695c072") .free_clusters' 00:25:45.691 20:26:23 -- common/autotest_common.sh@1346 -- # fc=5114 00:25:45.691 20:26:23 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="057ada0f-319c-4f09-b7e0-71204695c072") .cluster_size' 00:25:45.691 20:26:23 -- common/autotest_common.sh@1347 -- # cs=4194304 00:25:45.691 20:26:23 -- common/autotest_common.sh@1350 -- # 
free_mb=20456 00:25:45.691 20:26:23 -- common/autotest_common.sh@1351 -- # echo 20456 00:25:45.691 20456 00:25:45.691 20:26:23 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:45.691 20:26:23 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 057ada0f-319c-4f09-b7e0-71204695c072 lbd_nest_0 20456 00:25:45.950 20:26:23 -- host/perf.sh@88 -- # lb_nested_guid=acdb04b0-428e-48f7-8db5-c7cb4bcaa6fe 00:25:45.950 20:26:23 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.210 20:26:23 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:46.210 20:26:23 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 acdb04b0-428e-48f7-8db5-c7cb4bcaa6fe 00:25:46.469 20:26:23 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.469 20:26:23 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:46.469 20:26:23 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:46.469 20:26:23 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:46.469 20:26:23 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:46.469 20:26:23 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:46.469 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.685 Initializing NVMe Controllers 00:25:58.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:58.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:58.685 Initialization complete. Launching workers. 00:25:58.685 ======================================================== 00:25:58.685 Latency(us) 00:25:58.685 Device Information : IOPS MiB/s Average min max 00:25:58.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.30 0.02 20356.55 328.96 48044.44 00:25:58.685 ======================================================== 00:25:58.685 Total : 49.30 0.02 20356.55 328.96 48044.44 00:25:58.685 00:25:58.685 20:26:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:58.685 20:26:34 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:58.685 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.672 Initializing NVMe Controllers 00:26:08.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:08.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:08.672 Initialization complete. Launching workers. 
00:26:08.672 ========================================================
00:26:08.672 Latency(us)
00:26:08.672 Device Information : IOPS MiB/s Average min max
00:26:08.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.29 10.29 12161.95 3789.63 47884.87
00:26:08.672 ========================================================
00:26:08.672 Total : 82.29 10.29 12161.95 3789.63 47884.87
00:26:08.672
00:26:08.672 20:26:44 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:26:08.672 20:26:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:26:08.672 20:26:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:08.672 EAL: No free 2048 kB hugepages reported on node 1
00:26:18.704 Initializing NVMe Controllers
00:26:18.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:18.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:18.704 Initialization complete. Launching workers.
00:26:18.704 ========================================================
00:26:18.704 Latency(us)
00:26:18.704 Device Information : IOPS MiB/s Average min max
00:26:18.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7461.14 3.64 4289.27 438.77 12080.19
00:26:18.704 ========================================================
00:26:18.704 Total : 7461.14 3.64 4289.27 438.77 12080.19
00:26:18.704
00:26:18.704 20:26:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:26:18.704 20:26:54 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:18.704 EAL: No free 2048 kB hugepages reported on node 1
00:26:28.689 Initializing NVMe Controllers
00:26:28.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:28.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:28.689 Initialization complete. Launching workers.
00:26:28.689 ========================================================
00:26:28.689 Latency(us)
00:26:28.689 Device Information : IOPS MiB/s Average min max
00:26:28.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1594.20 199.27 20118.81 1457.46 68125.98
00:26:28.689 ========================================================
00:26:28.689 Total : 1594.20 199.27 20118.81 1457.46 68125.98
00:26:28.689
00:26:28.689 20:27:05 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:26:28.689 20:27:05 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:26:28.689 20:27:05 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:28.689 EAL: No free 2048 kB hugepages reported on node 1
00:26:38.671 Initializing NVMe Controllers
00:26:38.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:38.671 Controller IO queue size 128, less than required.
00:26:38.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:38.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:38.671 Initialization complete. Launching workers.
00:26:38.671 ========================================================
00:26:38.671 Latency(us)
00:26:38.671 Device Information : IOPS MiB/s Average min max
00:26:38.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15006.83 7.33 8530.19 1357.56 22384.77
00:26:38.671 ========================================================
00:26:38.671 Total : 15006.83 7.33 8530.19 1357.56 22384.77
00:26:38.671
00:26:38.671 20:27:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:26:38.671 20:27:15 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:38.671 EAL: No free 2048 kB hugepages reported on node 1
00:26:48.658 Initializing NVMe Controllers
00:26:48.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:48.658 Controller IO queue size 128, less than required.
00:26:48.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:48.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:48.658 Initialization complete. Launching workers.
00:26:48.658 ========================================================
00:26:48.658 Latency(us)
00:26:48.658 Device Information : IOPS MiB/s Average min max
00:26:48.658 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1172.80 146.60 109303.08 15066.39 241328.04
00:26:48.658 ========================================================
00:26:48.658 Total : 1172.80 146.60 109303.08 15066.39 241328.04
00:26:48.658
00:26:48.658 20:27:25 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:48.658 20:27:26 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete acdb04b0-428e-48f7-8db5-c7cb4bcaa6fe
00:26:49.595 20:27:26 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:26:49.595 20:27:26 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fc816aa8-7fbb-47c6-9b6f-cbf3209e9f97
00:26:49.855 20:27:27 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:26:49.855 20:27:27 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:26:49.855 20:27:27 -- host/perf.sh@114 -- # nvmftestfini
00:26:50.114 20:27:27 -- nvmf/common.sh@476 -- # nvmfcleanup
00:26:50.114 20:27:27 -- nvmf/common.sh@116 -- # sync
00:26:50.114 20:27:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:50.114 20:27:27 -- nvmf/common.sh@119 -- # set +e
00:26:50.114 20:27:27 -- nvmf/common.sh@120 -- # for i in {1..20}
00:26:50.114 20:27:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:26:50.114 rmmod nvme_tcp
00:26:50.114 rmmod nvme_fabrics
00:26:50.114 rmmod nvme_keyring
00:26:50.114 20:27:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:50.114 20:27:27 -- nvmf/common.sh@123 -- # set -e
00:26:50.114 20:27:27 -- nvmf/common.sh@124 -- # return 0
00:26:50.114 20:27:27 -- nvmf/common.sh@477 -- # '[' -n 1901827 ']'
00:26:50.114 20:27:27 -- nvmf/common.sh@478 -- # killprocess 1901827
00:26:50.114 20:27:27 -- common/autotest_common.sh@924 -- # '[' -z 1901827 ']'
00:26:50.114 20:27:27 -- common/autotest_common.sh@928 -- # kill
-0 1901827 00:26:50.114 20:27:27 -- common/autotest_common.sh@929 -- # uname 00:26:50.114 20:27:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:50.114 20:27:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1901827 00:26:50.114 20:27:27 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:50.114 20:27:27 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:50.114 20:27:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1901827' 00:26:50.114 killing process with pid 1901827 00:26:50.114 20:27:27 -- common/autotest_common.sh@943 -- # kill 1901827 00:26:50.114 20:27:27 -- common/autotest_common.sh@948 -- # wait 1901827 00:26:52.022 20:27:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:52.022 20:27:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:52.022 20:27:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:52.022 20:27:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.022 20:27:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:52.022 20:27:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.022 20:27:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.022 20:27:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.932 20:27:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:53.932 00:26:53.932 real 1m33.786s 00:26:53.932 user 5m36.608s 00:26:53.932 sys 0m14.118s 00:26:53.932 20:27:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:53.932 20:27:30 -- common/autotest_common.sh@10 -- # set +x 00:26:53.932 ************************************ 00:26:53.932 END TEST nvmf_perf 00:26:53.932 ************************************ 00:26:53.932 20:27:31 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:53.932 20:27:31 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:26:53.932 20:27:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:53.932 20:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:53.932 ************************************ 00:26:53.932 START TEST nvmf_fio_host 00:26:53.932 ************************************ 00:26:53.932 20:27:31 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:53.932 * Looking for test storage... 
00:26:53.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:53.932 20:27:31 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.932 20:27:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.932 20:27:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.932 20:27:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.932 20:27:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.932 20:27:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.932 20:27:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.932 20:27:31 -- paths/export.sh@5 -- # export PATH 00:26:53.932 20:27:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.932 20:27:31 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.932 20:27:31 -- nvmf/common.sh@7 -- # uname -s 00:26:53.932 20:27:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.932 20:27:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.932 20:27:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.932 20:27:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.932 20:27:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.932 20:27:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.932 20:27:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.932 20:27:31 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.932 20:27:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.932 20:27:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.932 20:27:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:53.932 20:27:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:53.932 20:27:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.932 20:27:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.932 20:27:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.932 20:27:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.932 20:27:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.932 20:27:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.932 20:27:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.932 20:27:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.932 20:27:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.932 20:27:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.932 20:27:31 -- paths/export.sh@5 -- # export PATH 00:26:53.932 20:27:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.932 20:27:31 -- nvmf/common.sh@46 -- # : 0 00:26:53.932 20:27:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:53.932 20:27:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:53.932 20:27:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:53.932 20:27:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.932 20:27:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.932 20:27:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:53.932 20:27:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:53.932 20:27:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:53.932 20:27:31 -- host/fio.sh@12 -- # nvmftestinit 00:26:53.932 20:27:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:53.932 20:27:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.932 20:27:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:53.932 20:27:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:53.932 20:27:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:53.932 20:27:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.932 20:27:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.932 20:27:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.932 20:27:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:53.932 20:27:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:53.932 20:27:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:53.932 20:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:59.261 20:27:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:59.261 20:27:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:59.261 20:27:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:59.261 20:27:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:59.261 20:27:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:59.261 20:27:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:59.261 20:27:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:59.261 20:27:36 -- nvmf/common.sh@294 -- # net_devs=() 00:26:59.261 20:27:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:59.261 20:27:36 -- nvmf/common.sh@295 -- # e810=() 00:26:59.261 20:27:36 -- nvmf/common.sh@295 -- # local -ga e810 00:26:59.261 20:27:36 -- nvmf/common.sh@296 -- # x722=() 00:26:59.261 20:27:36 -- nvmf/common.sh@296 -- # local -ga x722 00:26:59.261 20:27:36 -- nvmf/common.sh@297 -- # mlx=() 00:26:59.261 20:27:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:59.261 20:27:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.261 20:27:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:59.261 20:27:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:59.261 20:27:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:59.261 20:27:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:59.261 20:27:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:59.261 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:59.261 20:27:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:59.261 20:27:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:59.261 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:59.261 20:27:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:59.261 20:27:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:59.261 20:27:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.261 20:27:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:59.261 20:27:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.261 20:27:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:59.261 Found net devices under 0000:af:00.0: cvl_0_0 00:26:59.261 20:27:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.261 20:27:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:59.261 20:27:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.261 20:27:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:59.261 20:27:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.261 20:27:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:59.261 Found net devices under 0000:af:00.1: cvl_0_1 00:26:59.261 20:27:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.261 20:27:36 -- 
nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:59.261 20:27:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:59.261 20:27:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:59.261 20:27:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:59.261 20:27:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.261 20:27:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.261 20:27:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.261 20:27:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:59.261 20:27:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.261 20:27:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.261 20:27:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:59.261 20:27:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.261 20:27:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.261 20:27:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:59.261 20:27:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:59.261 20:27:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.261 20:27:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.522 20:27:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.522 20:27:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.522 20:27:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:59.522 20:27:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.522 20:27:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.522 20:27:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.522 20:27:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:59.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:26:59.522 00:26:59.522 --- 10.0.0.2 ping statistics --- 00:26:59.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.522 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:26:59.522 20:27:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:26:59.522 00:26:59.522 --- 10.0.0.1 ping statistics --- 00:26:59.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.522 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:26:59.522 20:27:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.522 20:27:36 -- nvmf/common.sh@410 -- # return 0 00:26:59.522 20:27:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:59.522 20:27:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.522 20:27:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:59.522 20:27:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:59.522 20:27:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.522 20:27:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:59.522 20:27:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:59.522 20:27:36 -- host/fio.sh@14 -- # [[ y != y ]] 00:26:59.522 20:27:36 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:26:59.522 20:27:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:59.522 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:26:59.522 20:27:36 -- host/fio.sh@22 -- # nvmfpid=1919990 00:26:59.522 20:27:36 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:59.522 20:27:36 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:59.522 20:27:36 -- host/fio.sh@26 -- # waitforlisten 1919990 00:26:59.522 20:27:36 -- common/autotest_common.sh@817 -- # '[' -z 1919990 ']' 00:26:59.522 20:27:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.522 20:27:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:59.781 20:27:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.781 20:27:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:59.781 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:26:59.781 [2024-02-14 20:27:36.979607] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:26:59.782 [2024-02-14 20:27:36.979655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.782 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.782 [2024-02-14 20:27:37.041633] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.782 [2024-02-14 20:27:37.119140] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:59.782 [2024-02-14 20:27:37.119263] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.782 [2024-02-14 20:27:37.119271] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.782 [2024-02-14 20:27:37.119277] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
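[Note] The target side of this suite runs inside the cvl_0_0_ns_spdk network namespace set up above, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) share one machine but use separate network stacks. The launch pattern visible in the trace is roughly the following sketch, where waitforlisten is the harness helper that polls the app's RPC socket until it answers:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten $nvmfpid    # returns once /var/tmp/spdk.sock accepts RPCs

The core mask -m 0xF pins the app to cores 0-3, which is why four reactors report in immediately after startup.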
00:26:59.782 [2024-02-14 20:27:37.119325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.782 [2024-02-14 20:27:37.119420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.782 [2024-02-14 20:27:37.119507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.782 [2024-02-14 20:27:37.119508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.733 20:27:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:00.733 20:27:37 -- common/autotest_common.sh@850 -- # return 0 00:27:00.733 20:27:37 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:00.733 20:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.733 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:00.733 [2024-02-14 20:27:37.776810] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.733 20:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.733 20:27:37 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:27:00.733 20:27:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:00.733 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:00.733 20:27:37 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:00.733 20:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.733 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:00.733 Malloc1 00:27:00.733 20:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.733 20:27:37 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:00.733 20:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.733 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:00.733 20:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.733 20:27:37 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:00.733 20:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.733 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:00.733 20:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.733 20:27:37 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:00.733 20:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.733 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:00.733 [2024-02-14 20:27:37.860134] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.733 20:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.733 20:27:37 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:00.733 20:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.733 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:00.733 20:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.733 20:27:37 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:00.733 20:27:37 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:00.733 20:27:37 -- common/autotest_common.sh@1337 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:00.733 20:27:37 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:27:00.733 20:27:37 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:00.733 20:27:37 -- common/autotest_common.sh@1316 -- # local sanitizers 00:27:00.733 20:27:37 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:00.733 20:27:37 -- common/autotest_common.sh@1318 -- # shift 00:27:00.733 20:27:37 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:27:00.733 20:27:37 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:00.733 20:27:37 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:00.733 20:27:37 -- common/autotest_common.sh@1322 -- # grep libasan 00:27:00.733 20:27:37 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:00.733 20:27:37 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:00.733 20:27:37 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:00.733 20:27:37 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:00.733 20:27:37 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:00.733 20:27:37 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:27:00.733 20:27:37 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:00.733 20:27:37 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:00.733 20:27:37 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:00.733 20:27:37 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:00.733 20:27:37 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:00.991 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:00.991 fio-3.35 00:27:00.991 Starting 1 thread 00:27:00.991 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.517 00:27:03.517 test: (groupid=0, jobs=1): err= 0: pid=1920356: Wed Feb 14 20:27:40 2024 00:27:03.517 read: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(93.7MiB/2004msec) 00:27:03.517 slat (nsec): min=1515, max=256783, avg=1805.53, stdev=2395.33 00:27:03.517 clat (usec): min=3043, max=18183, avg=6226.48, stdev=1564.99 00:27:03.517 lat (usec): min=3045, max=18195, avg=6228.28, stdev=1565.24 00:27:03.517 clat percentiles (usec): 00:27:03.517 | 1.00th=[ 4047], 5.00th=[ 4621], 10.00th=[ 4948], 20.00th=[ 5276], 00:27:03.517 | 30.00th=[ 5473], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 6063], 00:27:03.517 | 70.00th=[ 6325], 80.00th=[ 6849], 90.00th=[ 8225], 95.00th=[ 9372], 00:27:03.517 | 99.00th=[12256], 99.50th=[13829], 99.90th=[16319], 99.95th=[17171], 00:27:03.517 | 99.99th=[17957] 00:27:03.517 bw ( KiB/s): min=45696, max=49448, per=99.87%, avg=47792.00, stdev=1659.08, samples=4 00:27:03.517 iops : min=11424, max=12362, avg=11948.00, stdev=414.77, samples=4 00:27:03.517 write: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(93.2MiB/2004msec); 0 zone resets 00:27:03.517 slat (nsec): min=1588, max=268663, avg=1898.49, stdev=1935.65 00:27:03.517 
clat (usec): min=1715, max=16584, avg=4435.84, stdev=971.65 00:27:03.517 lat (usec): min=1717, max=16605, avg=4437.74, stdev=972.10 00:27:03.517 clat percentiles (usec): 00:27:03.517 | 1.00th=[ 2638], 5.00th=[ 3130], 10.00th=[ 3425], 20.00th=[ 3785], 00:27:03.517 | 30.00th=[ 4047], 40.00th=[ 4228], 50.00th=[ 4424], 60.00th=[ 4555], 00:27:03.517 | 70.00th=[ 4686], 80.00th=[ 4883], 90.00th=[ 5276], 95.00th=[ 5997], 00:27:03.517 | 99.00th=[ 7373], 99.50th=[ 9110], 99.90th=[13566], 99.95th=[15270], 00:27:03.517 | 99.99th=[16319] 00:27:03.517 bw ( KiB/s): min=46168, max=48288, per=100.00%, avg=47626.00, stdev=992.40, samples=4 00:27:03.517 iops : min=11542, max=12072, avg=11906.50, stdev=248.10, samples=4 00:27:03.517 lat (msec) : 2=0.01%, 4=14.29%, 10=83.81%, 20=1.89% 00:27:03.517 cpu : usr=69.65%, sys=24.21%, ctx=32, majf=0, minf=6 00:27:03.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:03.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:03.517 issued rwts: total=23976,23860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:03.517 00:27:03.517 Run status group 0 (all jobs): 00:27:03.517 READ: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=93.7MiB (98.2MB), run=2004-2004msec 00:27:03.517 WRITE: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=93.2MiB (97.7MB), run=2004-2004msec 00:27:03.517 20:27:40 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:03.517 20:27:40 -- common/autotest_common.sh@1337 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:03.517 20:27:40 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:27:03.517 20:27:40 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:03.517 20:27:40 -- common/autotest_common.sh@1316 -- # local sanitizers 00:27:03.517 20:27:40 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:03.517 20:27:40 -- common/autotest_common.sh@1318 -- # shift 00:27:03.517 20:27:40 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:27:03.517 20:27:40 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:03.518 20:27:40 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:03.518 20:27:40 -- common/autotest_common.sh@1322 -- # grep libasan 00:27:03.518 20:27:40 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:03.518 20:27:40 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:03.518 20:27:40 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:03.518 20:27:40 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:03.518 20:27:40 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:03.518 20:27:40 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:27:03.518 20:27:40 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:03.518 
20:27:40 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:03.518 20:27:40 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:03.518 20:27:40 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:03.518 20:27:40 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:03.518 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:03.518 fio-3.35 00:27:03.518 Starting 1 thread 00:27:03.518 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.046 00:27:06.046 test: (groupid=0, jobs=1): err= 0: pid=1920929: Wed Feb 14 20:27:43 2024 00:27:06.046 read: IOPS=9889, BW=155MiB/s (162MB/s)(310MiB/2005msec) 00:27:06.046 slat (nsec): min=2530, max=90971, avg=2861.22, stdev=1505.51 00:27:06.046 clat (usec): min=1225, max=34845, avg=7955.15, stdev=3175.81 00:27:06.046 lat (usec): min=1228, max=34851, avg=7958.01, stdev=3176.35 00:27:06.046 clat percentiles (usec): 00:27:06.046 | 1.00th=[ 3720], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5800], 00:27:06.046 | 30.00th=[ 6390], 40.00th=[ 6849], 50.00th=[ 7373], 60.00th=[ 8029], 00:27:06.046 | 70.00th=[ 8717], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[12256], 00:27:06.046 | 99.00th=[22938], 99.50th=[25560], 99.90th=[27132], 99.95th=[27132], 00:27:06.046 | 99.99th=[27657] 00:27:06.046 bw ( KiB/s): min=73280, max=87392, per=49.64%, avg=78552.00, stdev=6132.68, samples=4 00:27:06.046 iops : min= 4580, max= 5462, avg=4909.50, stdev=383.29, samples=4 00:27:06.046 write: IOPS=5833, BW=91.2MiB/s (95.6MB/s)(161MiB/1762msec); 0 zone resets 00:27:06.046 slat (usec): min=28, max=368, avg=31.74, stdev= 7.65 00:27:06.046 clat (usec): min=2009, max=27587, avg=8785.71, stdev=2934.40 00:27:06.046 lat (usec): min=2039, max=27685, avg=8817.45, stdev=2937.97 00:27:06.046 clat percentiles (usec): 00:27:06.046 | 1.00th=[ 5669], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 7242], 00:27:06.046 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8586], 00:27:06.046 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11338], 00:27:06.046 | 99.00th=[25297], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:27:06.046 | 99.99th=[27132] 00:27:06.046 bw ( KiB/s): min=76640, max=90720, per=87.74%, avg=81896.00, stdev=6220.76, samples=4 00:27:06.046 iops : min= 4790, max= 5670, avg=5118.50, stdev=388.80, samples=4 00:27:06.046 lat (msec) : 2=0.04%, 4=1.47%, 10=84.03%, 20=12.41%, 50=2.05% 00:27:06.046 cpu : usr=83.88%, sys=12.77%, ctx=16, majf=0, minf=3 00:27:06.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:27:06.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:06.046 issued rwts: total=19828,10279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:06.046 00:27:06.046 Run status group 0 (all jobs): 00:27:06.046 READ: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=310MiB (325MB), run=2005-2005msec 00:27:06.046 WRITE: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=161MiB (168MB), run=1762-1762msec 00:27:06.046 20:27:43 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
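[Note] Both fio jobs above go through SPDK's external fio engine rather than the kernel NVMe-oF initiator: the fio_nvme wrapper LD_PRELOADs build/fio/spdk_nvme and passes the transport ID through fio's --filename, which is why the job files report ioengine=spdk. A direct equivalent of the first invocation would be roughly (paths shortened; a sketch of the pattern, not the harness code):

    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The ldd/grep/awk sequence in the wrapper trace only checks whether the plugin was linked against ASan, so the sanitizer runtime can be preloaded ahead of the plugin when needed.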
00:27:06.046 20:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.046 20:27:43 -- common/autotest_common.sh@10 -- # set +x 00:27:06.046 20:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.046 20:27:43 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:27:06.046 20:27:43 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:27:06.046 20:27:43 -- host/fio.sh@49 -- # get_nvme_bdfs 00:27:06.046 20:27:43 -- common/autotest_common.sh@1496 -- # bdfs=() 00:27:06.046 20:27:43 -- common/autotest_common.sh@1496 -- # local bdfs 00:27:06.046 20:27:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:06.046 20:27:43 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:06.046 20:27:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:27:06.046 20:27:43 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:27:06.046 20:27:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:27:06.046 20:27:43 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:27:06.046 20:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.046 20:27:43 -- common/autotest_common.sh@10 -- # set +x 00:27:09.325 Nvme0n1 00:27:09.325 20:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.325 20:27:46 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:09.325 20:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.325 20:27:46 -- common/autotest_common.sh@10 -- # set +x 00:27:11.851 20:27:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.851 20:27:48 -- host/fio.sh@51 -- # ls_guid=36a4bea0-b32a-4c92-abf9-d581221c4129 00:27:11.851 20:27:48 -- host/fio.sh@52 -- # get_lvs_free_mb 36a4bea0-b32a-4c92-abf9-d581221c4129 00:27:11.851 20:27:48 -- common/autotest_common.sh@1341 -- # local lvs_uuid=36a4bea0-b32a-4c92-abf9-d581221c4129 00:27:11.851 20:27:48 -- common/autotest_common.sh@1342 -- # local lvs_info 00:27:11.851 20:27:48 -- common/autotest_common.sh@1343 -- # local fc 00:27:11.851 20:27:48 -- common/autotest_common.sh@1344 -- # local cs 00:27:11.851 20:27:48 -- common/autotest_common.sh@1345 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:11.851 20:27:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.851 20:27:48 -- common/autotest_common.sh@10 -- # set +x 00:27:11.851 20:27:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.851 20:27:48 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:27:11.851 { 00:27:11.851 "uuid": "36a4bea0-b32a-4c92-abf9-d581221c4129", 00:27:11.851 "name": "lvs_0", 00:27:11.851 "base_bdev": "Nvme0n1", 00:27:11.851 "total_data_clusters": 930, 00:27:11.851 "free_clusters": 930, 00:27:11.851 "block_size": 512, 00:27:11.851 "cluster_size": 1073741824 00:27:11.851 } 00:27:11.851 ]' 00:27:11.851 20:27:48 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="36a4bea0-b32a-4c92-abf9-d581221c4129") .free_clusters' 00:27:11.851 20:27:48 -- common/autotest_common.sh@1346 -- # fc=930 00:27:11.851 20:27:48 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="36a4bea0-b32a-4c92-abf9-d581221c4129") .cluster_size' 00:27:11.851 20:27:48 -- common/autotest_common.sh@1347 -- # cs=1073741824 00:27:11.851 20:27:48 -- common/autotest_common.sh@1350 -- # free_mb=952320 00:27:11.851 20:27:48 -- common/autotest_common.sh@1351 -- # echo 952320 00:27:11.851 
952320 00:27:11.851 20:27:48 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:11.851 20:27:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.851 20:27:48 -- common/autotest_common.sh@10 -- # set +x 00:27:11.851 892e49ff-f5f4-4ee5-aca0-1ff4134dcb64 00:27:11.851 20:27:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.851 20:27:49 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:11.851 20:27:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.851 20:27:49 -- common/autotest_common.sh@10 -- # set +x 00:27:11.851 20:27:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.851 20:27:49 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:11.851 20:27:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.851 20:27:49 -- common/autotest_common.sh@10 -- # set +x 00:27:11.851 20:27:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.851 20:27:49 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:11.851 20:27:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.851 20:27:49 -- common/autotest_common.sh@10 -- # set +x 00:27:11.851 20:27:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.851 20:27:49 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:11.851 20:27:49 -- common/autotest_common.sh@1337 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:11.851 20:27:49 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:27:11.851 20:27:49 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.851 20:27:49 -- common/autotest_common.sh@1316 -- # local sanitizers 00:27:11.851 20:27:49 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.851 20:27:49 -- common/autotest_common.sh@1318 -- # shift 00:27:11.851 20:27:49 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:27:11.851 20:27:49 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.851 20:27:49 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.851 20:27:49 -- common/autotest_common.sh@1322 -- # grep libasan 00:27:11.851 20:27:49 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:11.851 20:27:49 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:11.851 20:27:49 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:11.851 20:27:49 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.851 20:27:49 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:11.851 20:27:49 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:27:11.851 20:27:49 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:11.851 20:27:49 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:11.851 20:27:49 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:11.851 20:27:49 -- 
common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:11.851 20:27:49 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:12.109 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:12.109 fio-3.35 00:27:12.109 Starting 1 thread 00:27:12.109 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.636 00:27:14.636 test: (groupid=0, jobs=1): err= 0: pid=1922443: Wed Feb 14 20:27:51 2024 00:27:14.636 read: IOPS=8185, BW=32.0MiB/s (33.5MB/s)(64.1MiB/2006msec) 00:27:14.636 slat (nsec): min=1531, max=111618, avg=1807.62, stdev=1210.16 00:27:14.636 clat (usec): min=895, max=176873, avg=8824.64, stdev=10502.94 00:27:14.636 lat (usec): min=898, max=176891, avg=8826.45, stdev=10503.10 00:27:14.636 clat percentiles (msec): 00:27:14.636 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:27:14.636 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:27:14.636 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 12], 00:27:14.636 | 99.00th=[ 15], 99.50th=[ 16], 99.90th=[ 178], 99.95th=[ 178], 00:27:14.636 | 99.99th=[ 178] 00:27:14.636 bw ( KiB/s): min=22336, max=36456, per=99.90%, avg=32708.00, stdev=6921.61, samples=4 00:27:14.636 iops : min= 5584, max= 9114, avg=8177.50, stdev=1730.75, samples=4 00:27:14.636 write: IOPS=8191, BW=32.0MiB/s (33.6MB/s)(64.2MiB/2006msec); 0 zone resets 00:27:14.636 slat (nsec): min=1565, max=80214, avg=1893.76, stdev=822.32 00:27:14.636 clat (usec): min=441, max=173156, avg=6695.02, stdev=9657.42 00:27:14.636 lat (usec): min=443, max=173160, avg=6696.91, stdev=9657.59 00:27:14.636 clat percentiles (msec): 00:27:14.636 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:27:14.636 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 7], 00:27:14.636 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:27:14.636 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 171], 99.95th=[ 174], 00:27:14.636 | 99.99th=[ 174] 00:27:14.636 bw ( KiB/s): min=23280, max=36032, per=99.92%, avg=32742.00, stdev=6308.73, samples=4 00:27:14.636 iops : min= 5820, max= 9008, avg=8185.50, stdev=1577.18, samples=4 00:27:14.636 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:27:14.636 lat (msec) : 2=0.03%, 4=0.87%, 10=93.94%, 20=4.76%, 250=0.39% 00:27:14.636 cpu : usr=63.49%, sys=30.22%, ctx=58, majf=0, minf=6 00:27:14.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:14.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:14.636 issued rwts: total=16420,16433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.636 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:14.636 00:27:14.636 Run status group 0 (all jobs): 00:27:14.636 READ: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=64.1MiB (67.3MB), run=2006-2006msec 00:27:14.636 WRITE: bw=32.0MiB/s (33.6MB/s), 32.0MiB/s-32.0MiB/s (33.6MB/s-33.6MB/s), io=64.2MiB (67.3MB), run=2006-2006msec 00:27:14.636 20:27:51 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:14.636 20:27:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.636 20:27:51 -- common/autotest_common.sh@10 -- # set +x 00:27:14.636 20:27:51 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.636 20:27:51 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:14.636 20:27:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.636 20:27:51 -- common/autotest_common.sh@10 -- # set +x 00:27:15.201 20:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.202 20:27:52 -- host/fio.sh@62 -- # ls_nested_guid=74149b6a-a8f9-4a3d-b56a-08222e67dc73 00:27:15.202 20:27:52 -- host/fio.sh@63 -- # get_lvs_free_mb 74149b6a-a8f9-4a3d-b56a-08222e67dc73 00:27:15.202 20:27:52 -- common/autotest_common.sh@1341 -- # local lvs_uuid=74149b6a-a8f9-4a3d-b56a-08222e67dc73 00:27:15.202 20:27:52 -- common/autotest_common.sh@1342 -- # local lvs_info 00:27:15.202 20:27:52 -- common/autotest_common.sh@1343 -- # local fc 00:27:15.202 20:27:52 -- common/autotest_common.sh@1344 -- # local cs 00:27:15.202 20:27:52 -- common/autotest_common.sh@1345 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:15.202 20:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.202 20:27:52 -- common/autotest_common.sh@10 -- # set +x 00:27:15.202 20:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.202 20:27:52 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:27:15.202 { 00:27:15.202 "uuid": "36a4bea0-b32a-4c92-abf9-d581221c4129", 00:27:15.202 "name": "lvs_0", 00:27:15.202 "base_bdev": "Nvme0n1", 00:27:15.202 "total_data_clusters": 930, 00:27:15.202 "free_clusters": 0, 00:27:15.202 "block_size": 512, 00:27:15.202 "cluster_size": 1073741824 00:27:15.202 }, 00:27:15.202 { 00:27:15.202 "uuid": "74149b6a-a8f9-4a3d-b56a-08222e67dc73", 00:27:15.202 "name": "lvs_n_0", 00:27:15.202 "base_bdev": "892e49ff-f5f4-4ee5-aca0-1ff4134dcb64", 00:27:15.202 "total_data_clusters": 237847, 00:27:15.202 "free_clusters": 237847, 00:27:15.202 "block_size": 512, 00:27:15.202 "cluster_size": 4194304 00:27:15.202 } 00:27:15.202 ]' 00:27:15.202 20:27:52 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="74149b6a-a8f9-4a3d-b56a-08222e67dc73") .free_clusters' 00:27:15.459 20:27:52 -- common/autotest_common.sh@1346 -- # fc=237847 00:27:15.459 20:27:52 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="74149b6a-a8f9-4a3d-b56a-08222e67dc73") .cluster_size' 00:27:15.459 20:27:52 -- common/autotest_common.sh@1347 -- # cs=4194304 00:27:15.459 20:27:52 -- common/autotest_common.sh@1350 -- # free_mb=951388 00:27:15.459 20:27:52 -- common/autotest_common.sh@1351 -- # echo 951388 00:27:15.459 951388 00:27:15.459 20:27:52 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:15.459 20:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.459 20:27:52 -- common/autotest_common.sh@10 -- # set +x 00:27:15.717 fdbf0f7d-69bd-4c09-a2aa-1f87d544d3c7 00:27:15.717 20:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.717 20:27:53 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:15.717 20:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.717 20:27:53 -- common/autotest_common.sh@10 -- # set +x 00:27:15.717 20:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.717 20:27:53 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:15.717 20:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.717 20:27:53 -- common/autotest_common.sh@10 -- # set +x 00:27:15.717 20:27:53 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.717 20:27:53 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:15.717 20:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.717 20:27:53 -- common/autotest_common.sh@10 -- # set +x 00:27:15.717 20:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.717 20:27:53 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:15.717 20:27:53 -- common/autotest_common.sh@1337 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:15.717 20:27:53 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:27:15.717 20:27:53 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.717 20:27:53 -- common/autotest_common.sh@1316 -- # local sanitizers 00:27:15.717 20:27:53 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.717 20:27:53 -- common/autotest_common.sh@1318 -- # shift 00:27:15.717 20:27:53 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:27:15.717 20:27:53 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.717 20:27:53 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.717 20:27:53 -- common/autotest_common.sh@1322 -- # grep libasan 00:27:15.717 20:27:53 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:15.717 20:27:53 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:15.717 20:27:53 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:15.717 20:27:53 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.717 20:27:53 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.717 20:27:53 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:27:15.717 20:27:53 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:15.717 20:27:53 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:15.717 20:27:53 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:15.717 20:27:53 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:15.717 20:27:53 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:15.975 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:15.975 fio-3.35 00:27:15.975 Starting 1 thread 00:27:16.232 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.760 00:27:18.760 test: (groupid=0, jobs=1): err= 0: pid=1923250: Wed Feb 14 20:27:55 2024 00:27:18.760 read: IOPS=7923, BW=31.0MiB/s (32.5MB/s)(62.1MiB/2005msec) 00:27:18.760 slat (nsec): min=1531, max=104242, avg=1691.21, stdev=1123.29 00:27:18.760 clat (usec): min=4436, max=18484, avg=9195.11, stdev=1858.45 00:27:18.760 lat (usec): min=4439, max=18485, avg=9196.80, stdev=1858.47 
00:27:18.760 clat percentiles (usec): 00:27:18.760 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 7963], 00:27:18.760 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:27:18.760 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[11731], 95.00th=[13042], 00:27:18.760 | 99.00th=[16057], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:27:18.760 | 99.99th=[18482] 00:27:18.760 bw ( KiB/s): min=30392, max=32112, per=99.78%, avg=31624.00, stdev=825.49, samples=4 00:27:18.760 iops : min= 7598, max= 8028, avg=7906.00, stdev=206.37, samples=4 00:27:18.760 write: IOPS=7896, BW=30.8MiB/s (32.3MB/s)(61.8MiB/2005msec); 0 zone resets 00:27:18.760 slat (nsec): min=1583, max=82561, avg=1777.53, stdev=795.43 00:27:18.760 clat (usec): min=2028, max=13448, avg=6846.24, stdev=1156.93 00:27:18.760 lat (usec): min=2032, max=13450, avg=6848.02, stdev=1156.96 00:27:18.760 clat percentiles (usec): 00:27:18.760 | 1.00th=[ 4113], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 5997], 00:27:18.760 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7046], 00:27:18.760 | 70.00th=[ 7308], 80.00th=[ 7635], 90.00th=[ 8160], 95.00th=[ 8848], 00:27:18.760 | 99.00th=[10159], 99.50th=[10683], 99.90th=[12125], 99.95th=[12518], 00:27:18.760 | 99.99th=[13042] 00:27:18.760 bw ( KiB/s): min=31336, max=31824, per=99.92%, avg=31560.00, stdev=206.46, samples=4 00:27:18.760 iops : min= 7834, max= 7956, avg=7890.00, stdev=51.61, samples=4 00:27:18.760 lat (msec) : 4=0.28%, 10=89.28%, 20=10.44% 00:27:18.760 cpu : usr=65.82%, sys=28.74%, ctx=61, majf=0, minf=6 00:27:18.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:18.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:18.760 issued rwts: total=15887,15832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:18.760 00:27:18.760 Run status group 0 (all jobs): 00:27:18.760 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=62.1MiB (65.1MB), run=2005-2005msec 00:27:18.760 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.8MiB (64.8MB), run=2005-2005msec 00:27:18.760 20:27:55 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:18.760 20:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.760 20:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:18.760 20:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.760 20:27:55 -- host/fio.sh@72 -- # sync 00:27:18.760 20:27:55 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:18.760 20:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.760 20:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:22.044 20:27:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.044 20:27:59 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:27:22.044 20:27:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.044 20:27:59 -- common/autotest_common.sh@10 -- # set +x 00:27:22.044 20:27:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.044 20:27:59 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:27:22.044 20:27:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.044 20:27:59 -- common/autotest_common.sh@10 -- # set +x 00:27:24.621 20:28:01 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:27:24.621 20:28:01 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:27:24.621 20:28:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.621 20:28:01 -- common/autotest_common.sh@10 -- # set +x 00:27:24.621 20:28:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.621 20:28:01 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:27:24.621 20:28:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.621 20:28:01 -- common/autotest_common.sh@10 -- # set +x 00:27:26.520 20:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.520 20:28:03 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:27:26.520 20:28:03 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:27:26.520 20:28:03 -- host/fio.sh@84 -- # nvmftestfini 00:27:26.520 20:28:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:26.520 20:28:03 -- nvmf/common.sh@116 -- # sync 00:27:26.520 20:28:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:26.520 20:28:03 -- nvmf/common.sh@119 -- # set +e 00:27:26.520 20:28:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:26.520 20:28:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:26.520 rmmod nvme_tcp 00:27:26.520 rmmod nvme_fabrics 00:27:26.520 rmmod nvme_keyring 00:27:26.520 20:28:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:26.520 20:28:03 -- nvmf/common.sh@123 -- # set -e 00:27:26.520 20:28:03 -- nvmf/common.sh@124 -- # return 0 00:27:26.520 20:28:03 -- nvmf/common.sh@477 -- # '[' -n 1919990 ']' 00:27:26.520 20:28:03 -- nvmf/common.sh@478 -- # killprocess 1919990 00:27:26.520 20:28:03 -- common/autotest_common.sh@924 -- # '[' -z 1919990 ']' 00:27:26.520 20:28:03 -- common/autotest_common.sh@928 -- # kill -0 1919990 00:27:26.520 20:28:03 -- common/autotest_common.sh@929 -- # uname 00:27:26.520 20:28:03 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:27:26.520 20:28:03 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1919990 00:27:26.520 20:28:03 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:27:26.520 20:28:03 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:27:26.520 20:28:03 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1919990' 00:27:26.520 killing process with pid 1919990 00:27:26.520 20:28:03 -- common/autotest_common.sh@943 -- # kill 1919990 00:27:26.520 20:28:03 -- common/autotest_common.sh@948 -- # wait 1919990 00:27:26.520 20:28:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:26.520 20:28:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:26.520 20:28:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:26.520 20:28:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.520 20:28:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:26.520 20:28:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.520 20:28:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.520 20:28:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.055 20:28:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:29.055 00:27:29.055 real 0m34.882s 00:27:29.055 user 2m15.569s 00:27:29.055 sys 0m8.093s 00:27:29.055 20:28:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:29.055 20:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:29.055 ************************************ 00:27:29.055 END TEST nvmf_fio_host 00:27:29.055 ************************************ 00:27:29.055 
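Before the failover suite starts, it is worth condensing what nvmf_fio_host just exercised: attach the local NVMe device as a bdev, build an lvol store on it, size a volume from the store geometry, export it over NVMe-oF/TCP, and run fio against it. The free-space figures printed above follow directly from the reported geometry: lvs_0 has 930 free clusters of 1073741824 bytes, i.e. 930 * 1024 MiB = 952320 MiB, and the nested lvs_n_0 has 237847 free clusters of 4 MiB, i.e. 951388 MiB. A condensed sketch of the RPC sequence, with the rpc.py path shortened and the UUID filtering that get_lvs_free_mb applies in its jq step elided:

    # Provisioning flow exercised by host/fio.sh (condensed from the log).
    rpc=/path/to/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0   # exposes Nvme0n1
    $rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0           # 1 GiB clusters
    $rpc bdev_lvol_get_lvstores | jq '.[].free_clusters'                # -> 930, sizes the volume
    $rpc bdev_lvol_create -l lvs_0 lbd_0 952320                         # 930 * 1024 MiB
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420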
20:28:05 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:29.055 20:28:05 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:27:29.055 20:28:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:29.055 20:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:29.055 ************************************ 00:27:29.055 START TEST nvmf_failover 00:27:29.055 ************************************ 00:27:29.055 20:28:05 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:29.055 * Looking for test storage... 00:27:29.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.055 20:28:06 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.055 20:28:06 -- nvmf/common.sh@7 -- # uname -s 00:27:29.055 20:28:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.055 20:28:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.055 20:28:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.055 20:28:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.055 20:28:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.055 20:28:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.055 20:28:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.055 20:28:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.055 20:28:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.055 20:28:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.055 20:28:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:29.055 20:28:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:29.055 20:28:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.055 20:28:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.055 20:28:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.055 20:28:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.055 20:28:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.055 20:28:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.055 20:28:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.055 20:28:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.055 20:28:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.055 20:28:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.055 20:28:06 -- paths/export.sh@5 -- # export PATH 00:27:29.055 20:28:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.055 20:28:06 -- nvmf/common.sh@46 -- # : 0 00:27:29.055 20:28:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:29.055 20:28:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:29.055 20:28:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:29.055 20:28:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.055 20:28:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.055 20:28:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:29.055 20:28:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:29.055 20:28:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:29.055 20:28:06 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:29.055 20:28:06 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:29.055 20:28:06 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:29.055 20:28:06 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:29.055 20:28:06 -- host/failover.sh@18 -- # nvmftestinit 00:27:29.055 20:28:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:29.055 20:28:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.055 20:28:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:29.055 20:28:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:29.055 20:28:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:29.055 20:28:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.055 20:28:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.055 20:28:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.055 20:28:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:29.055 20:28:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
00:27:29.055 20:28:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:29.055 20:28:06 -- common/autotest_common.sh@10 -- # set +x 00:27:34.318 20:28:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:34.318 20:28:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:34.318 20:28:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:34.318 20:28:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:34.318 20:28:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:34.318 20:28:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:34.318 20:28:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:34.318 20:28:11 -- nvmf/common.sh@294 -- # net_devs=() 00:27:34.318 20:28:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:34.318 20:28:11 -- nvmf/common.sh@295 -- # e810=() 00:27:34.318 20:28:11 -- nvmf/common.sh@295 -- # local -ga e810 00:27:34.318 20:28:11 -- nvmf/common.sh@296 -- # x722=() 00:27:34.318 20:28:11 -- nvmf/common.sh@296 -- # local -ga x722 00:27:34.318 20:28:11 -- nvmf/common.sh@297 -- # mlx=() 00:27:34.318 20:28:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:34.318 20:28:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.318 20:28:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:34.318 20:28:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:34.318 20:28:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:34.318 20:28:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:34.318 20:28:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:34.318 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:34.318 20:28:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:34.318 20:28:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:34.318 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:34.318 20:28:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:27:34.318 20:28:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:34.318 20:28:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:34.318 20:28:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.318 20:28:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:34.318 20:28:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.318 20:28:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:34.318 Found net devices under 0000:af:00.0: cvl_0_0 00:27:34.318 20:28:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.318 20:28:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:34.318 20:28:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.318 20:28:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:34.318 20:28:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.318 20:28:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:34.318 Found net devices under 0000:af:00.1: cvl_0_1 00:27:34.318 20:28:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.318 20:28:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:34.318 20:28:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:34.318 20:28:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:34.318 20:28:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:34.318 20:28:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.318 20:28:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.319 20:28:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.319 20:28:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:34.319 20:28:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.319 20:28:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.319 20:28:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:34.319 20:28:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.319 20:28:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.319 20:28:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:34.319 20:28:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:34.319 20:28:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.319 20:28:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.577 20:28:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.577 20:28:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.577 20:28:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:34.577 20:28:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.577 20:28:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.577 20:28:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.577 20:28:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:34.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:27:34.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:27:34.577 00:27:34.577 --- 10.0.0.2 ping statistics --- 00:27:34.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.577 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:27:34.577 20:28:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:27:34.577 00:27:34.577 --- 10.0.0.1 ping statistics --- 00:27:34.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.577 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:27:34.577 20:28:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.577 20:28:11 -- nvmf/common.sh@410 -- # return 0 00:27:34.577 20:28:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:34.577 20:28:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.577 20:28:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:34.577 20:28:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:34.577 20:28:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.577 20:28:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:34.577 20:28:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:34.577 20:28:11 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:34.577 20:28:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:34.577 20:28:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:34.577 20:28:11 -- common/autotest_common.sh@10 -- # set +x 00:27:34.577 20:28:11 -- nvmf/common.sh@469 -- # nvmfpid=1928645 00:27:34.577 20:28:11 -- nvmf/common.sh@470 -- # waitforlisten 1928645 00:27:34.577 20:28:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:34.577 20:28:11 -- common/autotest_common.sh@817 -- # '[' -z 1928645 ']' 00:27:34.577 20:28:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.577 20:28:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:34.577 20:28:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.577 20:28:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:34.577 20:28:11 -- common/autotest_common.sh@10 -- # set +x 00:27:34.835 [2024-02-14 20:28:12.023723] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:27:34.835 [2024-02-14 20:28:12.023768] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.835 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.835 [2024-02-14 20:28:12.086479] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.835 [2024-02-14 20:28:12.161856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:34.835 [2024-02-14 20:28:12.161975] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.835 [2024-02-14 20:28:12.161983] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:34.835 [2024-02-14 20:28:12.161989] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.835 [2024-02-14 20:28:12.162089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.835 [2024-02-14 20:28:12.162178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.835 [2024-02-14 20:28:12.162179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.766 20:28:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:35.766 20:28:12 -- common/autotest_common.sh@850 -- # return 0 00:27:35.766 20:28:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:35.766 20:28:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:35.766 20:28:12 -- common/autotest_common.sh@10 -- # set +x 00:27:35.766 20:28:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.766 20:28:12 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:35.766 [2024-02-14 20:28:13.022266] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.766 20:28:13 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:36.027 Malloc0 00:27:36.027 20:28:13 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:36.027 20:28:13 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:36.287 20:28:13 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:36.544 [2024-02-14 20:28:13.732716] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.544 20:28:13 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:36.544 [2024-02-14 20:28:13.897145] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:36.544 20:28:13 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:36.801 [2024-02-14 20:28:14.069747] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:36.801 20:28:14 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:36.801 20:28:14 -- host/failover.sh@31 -- # bdevperf_pid=1928913 00:27:36.801 20:28:14 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:36.801 20:28:14 -- host/failover.sh@34 -- # waitforlisten 1928913 /var/tmp/bdevperf.sock 00:27:36.801 20:28:14 -- common/autotest_common.sh@817 -- # '[' -z 1928913 ']' 00:27:36.801 20:28:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:36.801 20:28:14 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:27:36.801 20:28:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:36.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:36.801 20:28:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:36.801 20:28:14 -- common/autotest_common.sh@10 -- # set +x 00:27:37.731 20:28:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:37.731 20:28:14 -- common/autotest_common.sh@850 -- # return 0 00:27:37.731 20:28:14 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:37.987 NVMe0n1 00:27:37.987 20:28:15 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:38.242 00:27:38.499 20:28:15 -- host/failover.sh@39 -- # run_test_pid=1929234 00:27:38.499 20:28:15 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:38.499 20:28:15 -- host/failover.sh@41 -- # sleep 1 00:27:39.429 20:28:16 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.429 [2024-02-14 20:28:16.845775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.845944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 
00:27:39.430 [2024-02-14 20:28:16.845949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.430 [2024-02-14 20:28:16.846314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.687 [2024-02-14 20:28:16.846319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.687 [2024-02-14 20:28:16.846325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.687 [2024-02-14 20:28:16.846331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.687 [2024-02-14 20:28:16.846337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d467e0 is same with the state(5) to be set 00:27:39.687 20:28:16 -- host/failover.sh@45 -- # sleep 3 00:27:42.959 20:28:19 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:42.959 00:27:42.959 20:28:20 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:42.959 [2024-02-14 20:28:20.329819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46ff0 is same with the state(5) to be set 00:27:42.959 [2024-02-14 20:28:20.329860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46ff0 is same with the state(5) to be set 00:27:42.959 [2024-02-14 20:28:20.329868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46ff0 is same with the state(5) to be set 00:27:42.959 [2024-02-14 20:28:20.329874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46ff0 is same with the state(5) to be set 00:27:42.959 [2024-02-14 
20:28:20.329880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46ff0 is same with the state(5) to be set
00:27:42.960 [identical recv-state messages for tqpair=0x1d46ff0, timestamped 20:28:20.329886 through 20:28:20.330147, omitted]
00:27:42.960 20:28:20 -- host/failover.sh@50 -- # sleep 3
00:27:46.235 20:28:23 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:46.235 [2024-02-14 20:28:23.517435] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:46.235 20:28:23 -- host/failover.sh@55 -- # sleep 1
00:27:47.166 20:28:24 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
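The @53/@57 steps above drive the target's listener lifecycle through SPDK's JSON-RPC interface: failover.sh re-adds the primary 4420 listener, waits, then removes the alternate 4422 listener to force the initiator onto the other path. A minimal standalone sketch of the same two calls (rpc.py and the nvmf_subsystem_*_listener methods are exactly as traced above; the RPC/NQN shell variables are illustrative shorthand):

    # Sketch only: reproduces the @53/@57 RPC calls outside the harness.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420     # bring the primary path up
    sleep 1                                                                # give the initiator time to reconnect
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422  # drop the alternate path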
00:27:47.424 [2024-02-14 20:28:24.709067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1460 is same with the state(5) to be set
00:27:47.425 [identical recv-state messages for tqpair=0x1af1460, timestamped 20:28:24.709098 through 20:28:24.709497, omitted]
00:27:47.425 20:28:24 -- host/failover.sh@59 -- # wait 1929234
00:27:54.020 0
00:27:54.020 20:28:30 -- host/failover.sh@61 -- # killprocess 1928913
00:27:54.020 20:28:30 -- common/autotest_common.sh@924 -- # '[' -z 1928913 ']'
00:27:54.020 20:28:30 -- common/autotest_common.sh@928 -- # kill -0 1928913
00:27:54.020 20:28:30 -- common/autotest_common.sh@929 -- # uname
00:27:54.020 20:28:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:27:54.020 20:28:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1928913
00:27:54.020 20:28:30 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:27:54.020 20:28:30 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:27:54.020 20:28:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1928913'
killing process with pid 1928913
00:27:54.020 20:28:30 -- common/autotest_common.sh@943 -- # kill 1928913
00:27:54.020 20:28:31 -- common/autotest_common.sh@948 -- # wait 1928913
00:27:54.020 20:28:31 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
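The killprocess trace at @924-@948 above shows the helper's guard-then-kill pattern: verify the pid argument, probe the process with kill -0, check its comm name is not a sudo wrapper, then kill and reap it. A sketch of that logic as it reads off the trace (a simplification; the upstream common/autotest_common.sh helper has more branches, e.g. for sudo-owned processes):

    killprocess() {
        [ -z "$1" ] && return 1                            # @924: a pid argument is required
        kill -0 "$1" || return 1                           # @928: bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then                    # @929: comm lookup below is Linux-specific
            process_name=$(ps --no-headers -o comm= "$1")  # @930: resolves to reactor_0 in this run
        fi
        [ "$process_name" = sudo ] && return 1             # @934: sudo wrappers need special handling (elided)
        echo "killing process with pid $1"                 # @942
        kill "$1"                                          # @943: default SIGTERM
        wait "$1"                                          # @948: reap the child and surface its exit status
    }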
00:27:54.020 [2024-02-14 20:28:14.139070] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:27:54.020 [2024-02-14 20:28:14.139120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1928913 ]
00:27:54.020 EAL: No free 2048 kB hugepages reported on node 1
00:27:54.020 [2024-02-14 20:28:14.200043] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:54.020 [2024-02-14 20:28:14.270961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:54.020 Running I/O for 15 seconds...
00:27:54.021 [2024-02-14 20:28:16.846811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.021 [2024-02-14 20:28:16.846842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.021 [analogous print_command/print_completion pairs, timestamped 20:28:16.846858 through 20:28:16.848651, omitted: every remaining in-flight READ/WRITE on sqid:1 (lba 12568-13832) was aborted with SQ DELETION (00/08)]
00:27:54.024 [2024-02-14 20:28:16.848660] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220f7c0 is same with the state(5) to be set
00:27:54.024 [2024-02-14 20:28:16.848668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:54.024 [2024-02-14 20:28:16.848673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:54.024 [2024-02-14 20:28:16.848679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13368 len:8 PRP1 0x0 PRP2 0x0
00:27:54.024 [2024-02-14 20:28:16.848685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.024 [2024-02-14 20:28:16.848727] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x220f7c0 was disconnected and freed. reset controller.
00:27:54.024 [2024-02-14 20:28:16.848741] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:54.024 [2024-02-14 20:28:16.848761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.024 [2024-02-14 20:28:16.848769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.024 [2024-02-14 20:28:16.848777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.024 [2024-02-14 20:28:16.848784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.024 [2024-02-14 20:28:16.848791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.024 [2024-02-14 20:28:16.848797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.024 [2024-02-14 20:28:16.848804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.024 [2024-02-14 20:28:16.848811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.024 [2024-02-14 20:28:16.848817] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:54.024 [2024-02-14 20:28:16.850599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:54.024 [2024-02-14 20:28:16.850621] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f06b0 (9): Bad file descriptor
00:27:54.024 [2024-02-14 20:28:16.920631] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
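The failover above, from 10.0.0.2:4420 to 10.0.0.2:4421, is the expected reaction when the active listener goes away while I/O is queued: every outstanding command is completed with ABORTED - SQ DELETION, the controller is disconnected, and the next registered path is tried. The test script itself is not shown in this log, so the following is only a minimal sketch of how such a path change is typically driven with SPDK's stock scripts/rpc.py helpers; the NQN, address, and ports are taken from the log lines above, and the bdev name Nvme0 is a placeholder.

    # Register two target paths under the same controller name; bdev_nvme
    # keeps the second trid as a failover path (sketch, not the actual script).
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1

    # Tearing down the active listener aborts the queued commands (the SQ
    # DELETION completions above) and produces "Start failover from ...:4420
    # to ...:4421" once the reset path picks the next trid.
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420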
00:27:54.024 [2024-02-14 20:28:20.330437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 
20:28:20.330617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.024 [2024-02-14 20:28:20.330793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.024 [2024-02-14 20:28:20.330799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.330988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.330996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.331002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.331016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.331030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.331045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.331116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.331145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.331158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.331187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128608 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:54.025 [2024-02-14 20:28:20.331200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.025 [2024-02-14 20:28:20.331252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.025 [2024-02-14 20:28:20.331258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.026 [2024-02-14 20:28:20.331272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.026 [2024-02-14 20:28:20.331314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 
[2024-02-14 20:28:20.331342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331484] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.026 [2024-02-14 20:28:20.331526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.026 [2024-02-14 20:28:20.331540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.026 [2024-02-14 20:28:20.331567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.026 [2024-02-14 20:28:20.331595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.026 [2024-02-14 20:28:20.331609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.026 [2024-02-14 20:28:20.331657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.026 [2024-02-14 20:28:20.331699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331770] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.026 [2024-02-14 20:28:20.331811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.026 [2024-02-14 20:28:20.331818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.331825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.027 [2024-02-14 20:28:20.331838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.331852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.027 [2024-02-14 20:28:20.331866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.027 [2024-02-14 20:28:20.331879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.027 [2024-02-14 20:28:20.331893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.027 [2024-02-14 20:28:20.331907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.331920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.027 [2024-02-14 20:28:20.331936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.331950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.331963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.331978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.331992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.331999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.027 [2024-02-14 20:28:20.332006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.027 [2024-02-14 20:28:20.332165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.027 [2024-02-14 20:28:20.332179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:54.027 [2024-02-14 20:28:20.332200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.027 [2024-02-14 20:28:20.332278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332286] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211810 is same with the state(5) to be set 00:27:54.027 [2024-02-14 20:28:20.332294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:54.027 [2024-02-14 20:28:20.332299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:54.027 [2024-02-14 20:28:20.332304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128496 len:8 PRP1 0x0 PRP2 0x0 00:27:54.027 [2024-02-14 20:28:20.332310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.027 [2024-02-14 20:28:20.332350] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2211810 was disconnected and freed. reset controller. 
00:27:54.027 [2024-02-14 20:28:20.332359] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:54.027 [2024-02-14 20:28:20.332379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.027 [2024-02-14 20:28:20.332386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.027 [2024-02-14 20:28:20.332393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.027 [2024-02-14 20:28:20.332399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.027 [2024-02-14 20:28:20.332406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.027 [2024-02-14 20:28:20.332412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.027 [2024-02-14 20:28:20.332419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.027 [2024-02-14 20:28:20.332425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.027 [2024-02-14 20:28:20.332431] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:54.028 [2024-02-14 20:28:20.334114] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:54.028 [2024-02-14 20:28:20.334136] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f06b0 (9): Bad file descriptor
00:27:54.028 [2024-02-14 20:28:20.365234] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
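Two failovers in, the pattern repeats: 4420 to 4421, then 4421 to 4422, each followed by "Resetting controller successful." When triaging a run like this, the per-command abort lines are noise and the state transitions are the signal; a few greps separate the two. This is only a suggested workflow, and build.log is a hypothetical name for wherever this console output was saved.

    LOG=build.log   # hypothetical path to the captured console output

    # Total completions cancelled by submission-queue deletion:
    grep -c 'ABORTED - SQ DELETION' "$LOG"

    # The lines that matter: each path change and its reset outcome.
    grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_fail|_bdev_nvme_reset_ctrlr_complete' "$LOG"

    # Aborts per queue, to confirm only the expected qpairs were drained:
    grep -o 'SQ DELETION (00/08) qid:[0-9]*' "$LOG" | sort | uniq -c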
00:27:54.028 [2024-02-14 20:28:24.709001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.028 [2024-02-14 20:28:24.709043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.028 [2024-02-14 20:28:24.709052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.028 [2024-02-14 20:28:24.709059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.028 [2024-02-14 20:28:24.709067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.028 [2024-02-14 20:28:24.709074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.028 [2024-02-14 20:28:24.709082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.028 [2024-02-14 20:28:24.709088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.028 [2024-02-14 20:28:24.709104] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f06b0 is same with the state(5) to be set 00:27:54.028 [2024-02-14 20:28:24.709659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.028 [2024-02-14 20:28:24.709679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.028 [2024-02-14 20:28:24.709693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.028 [2024-02-14 20:28:24.709700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.028 [2024-02-14 20:28:24.709708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.028 [2024-02-14 20:28:24.709715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.028 [2024-02-14 20:28:24.709722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.028 [2024-02-14 20:28:24.709729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.028 [2024-02-14 20:28:24.709737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.028 [2024-02-14 20:28:24.709743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.028 [2024-02-14 20:28:24.709751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.028 [2024-02-14 20:28:24.709757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.028 [2024-02-14 20:28:24.709764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.028 [2024-02-14 20:28:24.709771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.028 [... a long run of further nvme_io_qpair_print_command READ/WRITE prints (qid:1, lba 89000-90240, len:8), each followed by the same ABORTED - SQ DELETION (00/08) completion, elided: every command still queued on the I/O qpair is aborted while its submission queue is deleted for failover ...]
00:27:54.031 [2024-02-14 20:28:24.711468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.031 [2024-02-14 20:28:24.711474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
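The abort storm above is the normal signature of a forced TCP failover: deleting submission queue 1 completes every queued READ/WRITE with ABORTED - SQ DELETION (status 00/08) before the controller is reset against the next path. A quick way to sanity-check a run like this from the log the test writes (a minimal sketch; try.txt is the bdevperf log that host/failover.sh@94 cats below):

  # aborted completions vs. successful resets in the failover log
  grep -c 'ABORTED - SQ DELETION' try.txt
  grep -c 'Resetting controller successful' try.txt   # the test expects 3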
00:27:54.031 [2024-02-14 20:28:24.711481] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221ec80 is same with the state(5) to be set
00:27:54.031 [2024-02-14 20:28:24.711489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:54.031 [2024-02-14 20:28:24.711494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:54.031 [2024-02-14 20:28:24.711500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89832 len:8 PRP1 0x0 PRP2 0x0
00:27:54.031 [2024-02-14 20:28:24.711506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.031 [2024-02-14 20:28:24.711546] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x221ec80 was disconnected and freed. reset controller.
00:27:54.031 [2024-02-14 20:28:24.711554] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:27:54.031 [2024-02-14 20:28:24.711561] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:54.031 [2024-02-14 20:28:24.713425] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:54.031 [2024-02-14 20:28:24.713449] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f06b0 (9): Bad file descriptor
00:27:54.031 [2024-02-14 20:28:24.781756] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:54.031
00:27:54.031 Latency(us)
00:27:54.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:54.031 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:54.031 Verification LBA range: start 0x0 length 0x4000
00:27:54.031 NVMe0n1 : 15.00 16969.03 66.29 834.21 0.00 7177.18 698.27 22469.49
00:27:54.031 ===================================================================================================================
00:27:54.031 Total : 16969.03 66.29 834.21 0.00 7177.18 698.27 22469.49
00:27:54.031 Received shutdown signal, test time was about 15.000000 seconds
00:27:54.031
00:27:54.031 Latency(us)
00:27:54.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:54.031 ===================================================================================================================
00:27:54.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:54.031 20:28:31 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:54.031 20:28:31 -- host/failover.sh@65 -- # count=3
00:27:54.031 20:28:31 -- host/failover.sh@67 -- # (( count != 3 ))
00:27:54.031 20:28:31 -- host/failover.sh@73 -- # bdevperf_pid=1931694
00:27:54.031 20:28:31 -- host/failover.sh@75 -- # waitforlisten 1931694 /var/tmp/bdevperf.sock
00:27:54.031 20:28:31 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:54.031 20:28:31 -- common/autotest_common.sh@817 -- # '[' -z 1931694 ']'
00:27:54.031 20:28:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:54.031 20:28:31 -- common/autotest_common.sh@822 -- # local max_retries=100
00:27:54.031 20:28:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:54.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:54.031 20:28:31 -- common/autotest_common.sh@826 -- # xtrace_disable
00:27:54.031 20:28:31 -- common/autotest_common.sh@10 -- # set +x
00:27:54.596 20:28:31 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:27:54.596 20:28:31 -- common/autotest_common.sh@850 -- # return 0
00:27:54.852 20:28:32 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:54.852 [2024-02-14 20:28:32.077671] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:54.852 20:28:32 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:54.852 [2024-02-14 20:28:32.250159] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:27:55.109 20:28:32 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:55.367 NVMe0n1
00:27:55.367 20:28:32 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:55.624
00:27:55.624 20:28:32 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:55.881
00:27:55.881 20:28:33 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:55.881 20:28:33 -- host/failover.sh@82 -- # grep -q NVMe0
00:27:56.138 20:28:33 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:56.138 20:28:33 -- host/failover.sh@87 -- # sleep 3
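host/failover.sh@76-@87 above is the core of the failover exercise: publish the same subsystem on several TCP listeners, register every path under one bdev controller, then tear paths down so bdev_nvme has to fail over. A condensed sketch of that pattern (paths, address and ports are this job's values; not a drop-in script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # publish extra paths for the same subsystem
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

  # register all three paths under one controller name; bdev_nvme keeps
  # the extra transport IDs as failover targets
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done

  # drop the active path; I/O should resume on the next registered trid
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n $nqn
  sleep 3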
00:27:59.413 20:28:36 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:59.413 20:28:36 -- host/failover.sh@88 -- # grep -q NVMe0
00:27:59.413 20:28:36 -- host/failover.sh@90 -- # run_test_pid=1932609
00:27:59.413 20:28:36 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:59.413 20:28:36 -- host/failover.sh@92 -- # wait 1932609
00:28:00.784 0
00:28:00.784 20:28:37 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:00.784 [2024-02-14 20:28:31.129442] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:28:00.784 [2024-02-14 20:28:31.129493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931694 ]
00:28:00.784 EAL: No free 2048 kB hugepages reported on node 1
00:28:00.784 [2024-02-14 20:28:31.191949] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:00.784 [2024-02-14 20:28:31.257701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:00.784 [2024-02-14 20:28:33.477967] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:00.784 [2024-02-14 20:28:33.478011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:00.784 [2024-02-14 20:28:33.478021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:00.784 [2024-02-14 20:28:33.478030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:00.784 [2024-02-14 20:28:33.478037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:00.784 [2024-02-14 20:28:33.478043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:00.784 [2024-02-14 20:28:33.478049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:00.784 [2024-02-14 20:28:33.478056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:00.784 [2024-02-14 20:28:33.478062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:00.784 [2024-02-14 20:28:33.478069] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:00.784 [2024-02-14 20:28:33.478087] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:00.784 [2024-02-14 20:28:33.478100] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17286b0 (9): Bad file descriptor
00:28:00.784 [2024-02-14 20:28:33.485904] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:00.784 Running I/O for 1 seconds...
00:28:00.784
00:28:00.785 Latency(us)
00:28:00.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:00.785 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:00.785 Verification LBA range: start 0x0 length 0x4000
00:28:00.785 NVMe0n1 : 1.01 17087.65 66.75 0.00 0.00 7462.18 1107.87 17351.44
00:28:00.785 ===================================================================================================================
00:28:00.785 Total : 17087.65 66.75 0.00 0.00 7462.18 1107.87 17351.44
00:28:00.785 20:28:37 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:00.785 20:28:37 -- host/failover.sh@95 -- # grep -q NVMe0
00:28:00.785 20:28:37 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:00.785 20:28:38 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:01.041 20:28:38 -- host/failover.sh@99 -- # grep -q NVMe0
00:28:01.041 20:28:38 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:01.298 20:28:38 -- host/failover.sh@101 -- # sleep 3
00:28:04.568 20:28:41 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:04.568 20:28:41 -- host/failover.sh@103 -- # grep -q NVMe0
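The measured runs reuse one long-lived bdevperf process: -z starts it idle on its RPC socket, controllers are attached over that socket, and examples/bdev/bdevperf/bdevperf.py perform_tests triggers each I/O pass (host/failover.sh@72, @89 and @92 above). The shape of that flow, as a sketch using this job's paths:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bdevperf.sock

  # start bdevperf idle (-z) on a private RPC socket; -q/-o/-w/-t set
  # queue depth, I/O size, workload and duration for later runs
  $spdk/build/examples/bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # ... attach bdevs via $spdk/scripts/rpc.py -s $sock ... then kick a run:
  $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
  kill $bdevperf_pid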
00:28:04.568 20:28:41 -- host/failover.sh@108 -- # killprocess 1931694
00:28:04.568 20:28:41 -- common/autotest_common.sh@924 -- # '[' -z 1931694 ']'
00:28:04.568 20:28:41 -- common/autotest_common.sh@928 -- # kill -0 1931694
00:28:04.568 20:28:41 -- common/autotest_common.sh@929 -- # uname
00:28:04.568 20:28:41 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:28:04.568 20:28:41 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1931694
00:28:04.568 20:28:41 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:28:04.568 20:28:41 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:28:04.568 20:28:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1931694'
00:28:04.568 killing process with pid 1931694
00:28:04.568 20:28:41 -- common/autotest_common.sh@943 -- # kill 1931694
00:28:04.568 20:28:41 -- common/autotest_common.sh@948 -- # wait 1931694
00:28:04.568 20:28:41 -- host/failover.sh@110 -- # sync
00:28:04.568 20:28:41 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:04.825 20:28:42 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:28:04.825 20:28:42 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:04.825 20:28:42 -- host/failover.sh@116 -- # nvmftestfini
00:28:04.825 20:28:42 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:04.825 20:28:42 -- nvmf/common.sh@116 -- # sync
00:28:04.825 20:28:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:28:04.825 20:28:42 -- nvmf/common.sh@119 -- # set +e
00:28:04.825 20:28:42 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:04.825 20:28:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:28:04.825 rmmod nvme_tcp
00:28:04.825 rmmod nvme_fabrics
00:28:04.825 rmmod nvme_keyring
00:28:04.825 20:28:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:28:04.825 20:28:42 -- nvmf/common.sh@123 -- # set -e
00:28:04.825 20:28:42 -- nvmf/common.sh@124 -- # return 0
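nvmfcleanup (nvmf/common.sh@116-@124 above) unloads the kernel NVMe/TCP stack with retries under set +e, since nvme-tcp can stay busy briefly after the last disconnect. The idiom, reduced to a sketch (the back-off delay is an assumption; the helper's exact loop body is not shown in this log):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1   # hypothetical back-off between unload attempts
  done
  set -e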
00:28:04.825 20:28:42 -- nvmf/common.sh@477 -- # '[' -n 1928645 ']'
00:28:04.825 20:28:42 -- nvmf/common.sh@478 -- # killprocess 1928645
00:28:04.825 20:28:42 -- common/autotest_common.sh@924 -- # '[' -z 1928645 ']'
00:28:04.825 20:28:42 -- common/autotest_common.sh@928 -- # kill -0 1928645
00:28:04.825 20:28:42 -- common/autotest_common.sh@929 -- # uname
00:28:04.825 20:28:42 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:28:04.826 20:28:42 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1928645
00:28:04.826 20:28:42 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:28:04.826 20:28:42 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:28:04.826 20:28:42 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1928645'
00:28:04.826 killing process with pid 1928645
00:28:04.826 20:28:42 -- common/autotest_common.sh@943 -- # kill 1928645
00:28:04.826 20:28:42 -- common/autotest_common.sh@948 -- # wait 1928645
00:28:05.083 20:28:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:28:05.083 20:28:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:28:05.083 20:28:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:28:05.083 20:28:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:05.083 20:28:42 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:28:05.083 20:28:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:05.083 20:28:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:05.083 20:28:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:07.611 20:28:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:28:07.611
00:28:07.611 real    0m38.559s
00:28:07.611 user    2m2.530s
00:28:07.611 sys     0m7.947s
00:28:07.612 20:28:44 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:28:07.612 20:28:44 -- common/autotest_common.sh@10 -- # set +x
00:28:07.612 ************************************
00:28:07.612 END TEST nvmf_failover
00:28:07.612 ************************************
00:28:07.612 20:28:44 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:28:07.612 20:28:44 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']'
00:28:07.612 20:28:44 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:28:07.612 20:28:44 -- common/autotest_common.sh@10 -- # set +x
00:28:07.612 ************************************
00:28:07.612 START TEST nvmf_discovery
00:28:07.612 ************************************
00:28:07.612 20:28:44 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:28:07.612 * Looking for test storage...
00:28:07.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:07.612 20:28:44 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:07.612 20:28:44 -- nvmf/common.sh@7 -- # uname -s
00:28:07.612 20:28:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:07.612 20:28:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:07.612 20:28:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:07.612 20:28:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:07.612 20:28:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:07.612 20:28:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:07.612 20:28:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:07.612 20:28:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:07.612 20:28:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:07.612 20:28:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:07.612 20:28:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:28:07.612 20:28:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:28:07.612 20:28:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:07.612 20:28:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:07.612 20:28:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:07.612 20:28:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:07.612 20:28:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:07.612 20:28:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:07.612 20:28:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:07.612 20:28:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:... (export.sh@2-@6 prepend the golangci/protoc/go toolchain dirs ahead of the system PATH and re-export it; the five near-identical, heavily duplicated expanded PATH values are elided here)
00:28:07.612 20:28:44 -- paths/export.sh@5 -- # export PATH
00:28:07.612 20:28:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:... (full PATH elided)
00:28:07.612 20:28:44 -- nvmf/common.sh@46 -- # : 0
00:28:07.612 20:28:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:28:07.612 20:28:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:28:07.612 20:28:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:28:07.612 20:28:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:07.612 20:28:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:07.612 20:28:44 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:28:07.612 20:28:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:28:07.612 20:28:44 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:28:07.612 20:28:44 -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:28:07.612 20:28:44 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:28:07.612 20:28:44 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:28:07.612 20:28:44 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:28:07.612 20:28:44 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:28:07.612 20:28:44 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:28:07.612 20:28:44 -- host/discovery.sh@25 -- # nvmftestinit
00:28:07.612 20:28:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:28:07.612 20:28:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:07.612 20:28:44 -- nvmf/common.sh@436 -- # prepare_net_devs
00:28:07.612 20:28:44 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:28:07.612 20:28:44 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:28:07.612 20:28:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:07.612 20:28:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:07.612 20:28:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:07.612 20:28:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:28:07.612 20:28:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:28:07.612 20:28:44 -- nvmf/common.sh@284 -- # xtrace_disable
00:28:07.612 20:28:44 -- common/autotest_common.sh@10 -- # set +x
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:14.238 20:28:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:14.238 20:28:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:14.238 20:28:50 -- nvmf/common.sh@294 -- # net_devs=() 00:28:14.238 20:28:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:14.238 20:28:50 -- nvmf/common.sh@295 -- # e810=() 00:28:14.238 20:28:50 -- nvmf/common.sh@295 -- # local -ga e810 00:28:14.238 20:28:50 -- nvmf/common.sh@296 -- # x722=() 00:28:14.238 20:28:50 -- nvmf/common.sh@296 -- # local -ga x722 00:28:14.238 20:28:50 -- nvmf/common.sh@297 -- # mlx=() 00:28:14.238 20:28:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:14.238 20:28:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.238 20:28:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:14.238 20:28:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:14.238 20:28:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:14.238 20:28:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:14.238 20:28:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:14.238 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:14.238 20:28:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:14.238 20:28:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:14.238 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:14.238 20:28:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:14.238 20:28:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:14.238 
20:28:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.238 20:28:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:14.238 20:28:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.238 20:28:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:14.238 Found net devices under 0000:af:00.0: cvl_0_0 00:28:14.238 20:28:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.238 20:28:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:14.238 20:28:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.238 20:28:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:14.238 20:28:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.238 20:28:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:14.238 Found net devices under 0000:af:00.1: cvl_0_1 00:28:14.238 20:28:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.238 20:28:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:14.238 20:28:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:14.238 20:28:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:14.238 20:28:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:14.238 20:28:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.238 20:28:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.238 20:28:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.238 20:28:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:14.238 20:28:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.238 20:28:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.238 20:28:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:14.238 20:28:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.238 20:28:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.238 20:28:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:14.238 20:28:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:14.238 20:28:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.238 20:28:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.238 20:28:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.238 20:28:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.238 20:28:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:14.239 20:28:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.239 20:28:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.239 20:28:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.239 20:28:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:14.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:28:14.239 00:28:14.239 --- 10.0.0.2 ping statistics --- 00:28:14.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.239 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:28:14.239 20:28:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:28:14.239 00:28:14.239 --- 10.0.0.1 ping statistics --- 00:28:14.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.239 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:28:14.239 20:28:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.239 20:28:50 -- nvmf/common.sh@410 -- # return 0 00:28:14.239 20:28:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:14.239 20:28:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.239 20:28:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:14.239 20:28:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:14.239 20:28:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.239 20:28:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:14.239 20:28:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:14.239 20:28:50 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:14.239 20:28:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:14.239 20:28:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:14.239 20:28:50 -- common/autotest_common.sh@10 -- # set +x 00:28:14.239 20:28:50 -- nvmf/common.sh@469 -- # nvmfpid=1937333 00:28:14.239 20:28:50 -- nvmf/common.sh@470 -- # waitforlisten 1937333 00:28:14.239 20:28:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:14.239 20:28:50 -- common/autotest_common.sh@817 -- # '[' -z 1937333 ']' 00:28:14.239 20:28:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.239 20:28:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:14.239 20:28:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.239 20:28:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:14.239 20:28:50 -- common/autotest_common.sh@10 -- # set +x 00:28:14.239 [2024-02-14 20:28:50.795088] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:28:14.239 [2024-02-14 20:28:50.795132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.239 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.239 [2024-02-14 20:28:50.857642] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.239 [2024-02-14 20:28:50.932545] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:14.239 [2024-02-14 20:28:50.932670] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.239 [2024-02-14 20:28:50.932678] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.239 [2024-02-14 20:28:50.932685] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
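nvmftestinit above builds the point-to-point TCP topology before any RPCs run: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, both directions are ping-checked, and nvmf_tgt is then launched inside the namespace. A recap of the equivalent commands, taken from the nvmf_tcp_init and nvmfappstart traces in this run:

# target side: isolate cvl_0_0 in its own namespace as 10.0.0.2
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# initiator side: cvl_0_1 remains in the root namespace as 10.0.0.1
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# verify reachability both ways, then start the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2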
00:28:14.239 [2024-02-14 20:28:50.932699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.239 20:28:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:14.239 20:28:51 -- common/autotest_common.sh@850 -- # return 0 00:28:14.239 20:28:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:14.239 20:28:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:14.239 20:28:51 -- common/autotest_common.sh@10 -- # set +x 00:28:14.239 20:28:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.239 20:28:51 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:14.239 20:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.239 20:28:51 -- common/autotest_common.sh@10 -- # set +x 00:28:14.239 [2024-02-14 20:28:51.623007] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.239 20:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.239 20:28:51 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:14.239 20:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.239 20:28:51 -- common/autotest_common.sh@10 -- # set +x 00:28:14.239 [2024-02-14 20:28:51.635136] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:14.239 20:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.239 20:28:51 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:14.239 20:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.239 20:28:51 -- common/autotest_common.sh@10 -- # set +x 00:28:14.239 null0 00:28:14.239 20:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.239 20:28:51 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:14.239 20:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.239 20:28:51 -- common/autotest_common.sh@10 -- # set +x 00:28:14.499 null1 00:28:14.499 20:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.499 20:28:51 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:14.499 20:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.499 20:28:51 -- common/autotest_common.sh@10 -- # set +x 00:28:14.499 20:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.499 20:28:51 -- host/discovery.sh@45 -- # hostpid=1937578 00:28:14.499 20:28:51 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:14.499 20:28:51 -- host/discovery.sh@46 -- # waitforlisten 1937578 /tmp/host.sock 00:28:14.499 20:28:51 -- common/autotest_common.sh@817 -- # '[' -z 1937578 ']' 00:28:14.499 20:28:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:28:14.499 20:28:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:14.499 20:28:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:14.499 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:14.499 20:28:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:14.499 20:28:51 -- common/autotest_common.sh@10 -- # set +x 00:28:14.499 [2024-02-14 20:28:51.707501] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:28:14.499 [2024-02-14 20:28:51.707539] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1937578 ] 00:28:14.499 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.499 [2024-02-14 20:28:51.766288] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.499 [2024-02-14 20:28:51.835177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:14.499 [2024-02-14 20:28:51.835294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.435 20:28:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:15.435 20:28:52 -- common/autotest_common.sh@850 -- # return 0 00:28:15.435 20:28:52 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.435 20:28:52 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:15.435 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.435 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.435 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.435 20:28:52 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@72 -- # notify_id=0 00:28:15.436 20:28:52 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # sort 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # xargs 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:15.436 20:28:52 -- host/discovery.sh@79 -- # get_bdev_list 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # sort 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # xargs 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:15.436 20:28:52 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # sort 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # xargs 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:15.436 20:28:52 -- host/discovery.sh@83 -- # get_bdev_list 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # sort 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # xargs 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:15.436 20:28:52 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # sort 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # xargs 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:15.436 20:28:52 -- host/discovery.sh@87 -- # get_bdev_list 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # sort 00:28:15.436 20:28:52 -- host/discovery.sh@55 -- # xargs 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:15.436 20:28:52 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 [2024-02-14 20:28:52.826300] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.436 20:28:52 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # xargs 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:15.436 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.436 20:28:52 -- host/discovery.sh@59 -- # sort 00:28:15.436 
20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.436 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.695 20:28:52 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:15.695 20:28:52 -- host/discovery.sh@93 -- # get_bdev_list 00:28:15.695 20:28:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.695 20:28:52 -- host/discovery.sh@55 -- # xargs 00:28:15.695 20:28:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.695 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.695 20:28:52 -- host/discovery.sh@55 -- # sort 00:28:15.695 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.695 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.695 20:28:52 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:15.695 20:28:52 -- host/discovery.sh@94 -- # get_notification_count 00:28:15.695 20:28:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:15.695 20:28:52 -- host/discovery.sh@74 -- # jq '. | length' 00:28:15.695 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.695 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.695 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.695 20:28:52 -- host/discovery.sh@74 -- # notification_count=0 00:28:15.695 20:28:52 -- host/discovery.sh@75 -- # notify_id=0 00:28:15.695 20:28:52 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:15.695 20:28:52 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:15.695 20:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.695 20:28:52 -- common/autotest_common.sh@10 -- # set +x 00:28:15.695 20:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.695 20:28:52 -- host/discovery.sh@100 -- # sleep 1 00:28:16.262 [2024-02-14 20:28:53.554686] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:16.262 [2024-02-14 20:28:53.554704] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:16.262 [2024-02-14 20:28:53.554718] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:16.262 [2024-02-14 20:28:53.644001] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:16.521 [2024-02-14 20:28:53.745261] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:16.521 [2024-02-14 20:28:53.745279] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:16.780 20:28:53 -- host/discovery.sh@101 -- # get_subsystem_names 00:28:16.780 20:28:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:16.780 20:28:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:16.780 20:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.780 20:28:53 -- host/discovery.sh@59 -- # sort 00:28:16.780 20:28:53 -- common/autotest_common.sh@10 -- # set +x 00:28:16.780 20:28:53 -- host/discovery.sh@59 -- # xargs 00:28:16.780 20:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.780 20:28:54 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.780 20:28:54 -- host/discovery.sh@102 -- # get_bdev_list 00:28:16.780 20:28:54 -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.780 20:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.780 20:28:54 -- common/autotest_common.sh@10 -- # set +x 00:28:16.780 20:28:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:16.780 20:28:54 -- host/discovery.sh@55 -- # sort 00:28:16.780 20:28:54 -- host/discovery.sh@55 -- # xargs 00:28:16.780 20:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.780 20:28:54 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:16.780 20:28:54 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:16.780 20:28:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:16.780 20:28:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:16.780 20:28:54 -- host/discovery.sh@63 -- # xargs 00:28:16.780 20:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.780 20:28:54 -- common/autotest_common.sh@10 -- # set +x 00:28:16.780 20:28:54 -- host/discovery.sh@63 -- # sort -n 00:28:16.780 20:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.780 20:28:54 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:16.780 20:28:54 -- host/discovery.sh@104 -- # get_notification_count 00:28:16.780 20:28:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:16.780 20:28:54 -- host/discovery.sh@74 -- # jq '. | length' 00:28:16.780 20:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.780 20:28:54 -- common/autotest_common.sh@10 -- # set +x 00:28:16.780 20:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.780 20:28:54 -- host/discovery.sh@74 -- # notification_count=1 00:28:16.780 20:28:54 -- host/discovery.sh@75 -- # notify_id=1 00:28:16.780 20:28:54 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:16.780 20:28:54 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:16.780 20:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.780 20:28:54 -- common/autotest_common.sh@10 -- # set +x 00:28:16.780 20:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.780 20:28:54 -- host/discovery.sh@109 -- # sleep 1 00:28:18.157 20:28:55 -- host/discovery.sh@110 -- # get_bdev_list 00:28:18.157 20:28:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.157 20:28:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:18.157 20:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.157 20:28:55 -- host/discovery.sh@55 -- # sort 00:28:18.157 20:28:55 -- common/autotest_common.sh@10 -- # set +x 00:28:18.157 20:28:55 -- host/discovery.sh@55 -- # xargs 00:28:18.157 20:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.157 20:28:55 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:18.157 20:28:55 -- host/discovery.sh@111 -- # get_notification_count 00:28:18.157 20:28:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:18.157 20:28:55 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:18.157 20:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.157 20:28:55 -- common/autotest_common.sh@10 -- # set +x 00:28:18.157 20:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.157 20:28:55 -- host/discovery.sh@74 -- # notification_count=1 00:28:18.157 20:28:55 -- host/discovery.sh@75 -- # notify_id=2 00:28:18.157 20:28:55 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:18.157 20:28:55 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:18.157 20:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.157 20:28:55 -- common/autotest_common.sh@10 -- # set +x 00:28:18.157 [2024-02-14 20:28:55.277063] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:18.157 [2024-02-14 20:28:55.277930] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:18.157 [2024-02-14 20:28:55.277951] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:18.157 20:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.157 20:28:55 -- host/discovery.sh@117 -- # sleep 1 00:28:18.157 [2024-02-14 20:28:55.365195] bdev_nvme.c:6628:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:18.415 [2024-02-14 20:28:55.675733] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:18.415 [2024-02-14 20:28:55.675750] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:18.415 [2024-02-14 20:28:55.675754] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:18.982 20:28:56 -- host/discovery.sh@118 -- # get_subsystem_names 00:28:18.982 20:28:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:18.982 20:28:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:18.982 20:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.982 20:28:56 -- host/discovery.sh@59 -- # sort 00:28:18.982 20:28:56 -- common/autotest_common.sh@10 -- # set +x 00:28:18.982 20:28:56 -- host/discovery.sh@59 -- # xargs 00:28:18.982 20:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.982 20:28:56 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.983 20:28:56 -- host/discovery.sh@119 -- # get_bdev_list 00:28:18.983 20:28:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.983 20:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.983 20:28:56 -- common/autotest_common.sh@10 -- # set +x 00:28:18.983 20:28:56 -- host/discovery.sh@55 -- # xargs 00:28:18.983 20:28:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:18.983 20:28:56 -- host/discovery.sh@55 -- # sort 00:28:18.983 20:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.983 20:28:56 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:18.983 20:28:56 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:18.983 20:28:56 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:18.983 20:28:56 -- host/discovery.sh@63 -- # xargs 00:28:18.983 20:28:56 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:28:18.983 20:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.983 20:28:56 -- host/discovery.sh@63 -- # sort -n 00:28:18.983 20:28:56 -- common/autotest_common.sh@10 -- # set +x 00:28:19.242 20:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.242 20:28:56 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:19.242 20:28:56 -- host/discovery.sh@121 -- # get_notification_count 00:28:19.242 20:28:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:19.242 20:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.242 20:28:56 -- common/autotest_common.sh@10 -- # set +x 00:28:19.242 20:28:56 -- host/discovery.sh@74 -- # jq '. | length' 00:28:19.242 20:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.242 20:28:56 -- host/discovery.sh@74 -- # notification_count=0 00:28:19.242 20:28:56 -- host/discovery.sh@75 -- # notify_id=2 00:28:19.242 20:28:56 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:19.242 20:28:56 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:19.242 20:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.242 20:28:56 -- common/autotest_common.sh@10 -- # set +x 00:28:19.242 [2024-02-14 20:28:56.489145] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:19.242 [2024-02-14 20:28:56.489165] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:19.242 20:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.242 20:28:56 -- host/discovery.sh@127 -- # sleep 1 00:28:19.242 [2024-02-14 20:28:56.495230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.242 [2024-02-14 20:28:56.495248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.242 [2024-02-14 20:28:56.495257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.242 [2024-02-14 20:28:56.495263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.242 [2024-02-14 20:28:56.495287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.242 [2024-02-14 20:28:56.495294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.242 [2024-02-14 20:28:56.495302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.242 [2024-02-14 20:28:56.495308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.242 [2024-02-14 20:28:56.495315] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880b50 is same with the state(5) to be set 00:28:19.242 [2024-02-14 20:28:56.505245] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880b50 (9): Bad file descriptor 00:28:19.242 [2024-02-14 20:28:56.515283] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:19.242 [2024-02-14 20:28:56.515758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.516128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.516138] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x880b50 with addr=10.0.0.2, port=4420 00:28:19.242 [2024-02-14 20:28:56.516145] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880b50 is same with the state(5) to be set 00:28:19.242 [2024-02-14 20:28:56.516156] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880b50 (9): Bad file descriptor 00:28:19.242 [2024-02-14 20:28:56.516166] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:19.242 [2024-02-14 20:28:56.516172] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:19.242 [2024-02-14 20:28:56.516179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:19.242 [2024-02-14 20:28:56.516190] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.242 [2024-02-14 20:28:56.525334] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:19.242 [2024-02-14 20:28:56.525803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.526223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.526236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x880b50 with addr=10.0.0.2, port=4420 00:28:19.242 [2024-02-14 20:28:56.526243] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880b50 is same with the state(5) to be set 00:28:19.242 [2024-02-14 20:28:56.526253] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880b50 (9): Bad file descriptor 00:28:19.242 [2024-02-14 20:28:56.526273] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:19.242 [2024-02-14 20:28:56.526280] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:19.242 [2024-02-14 20:28:56.526286] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:19.242 [2024-02-14 20:28:56.526295] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
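Every reconnect attempt in this stretch fails the same way because the test has just torn down the 4420 listener (the nvmf_subsystem_remove_listener call at discovery.sh@126) while bdev_nvme is still resetting the controller against that port. The errno = 111 in the posix_sock_create messages is ECONNREFUSED; a quick way to confirm the mapping on Linux, assuming python3 is available on the box:

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused

The reset cycles keep repeating until the discovery poller processes the updated discovery log page and drops the dead 4420 path.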
00:28:19.242 [2024-02-14 20:28:56.535383] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:19.242 [2024-02-14 20:28:56.535829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.536182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.536193] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x880b50 with addr=10.0.0.2, port=4420 00:28:19.242 [2024-02-14 20:28:56.536199] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880b50 is same with the state(5) to be set 00:28:19.242 [2024-02-14 20:28:56.536210] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880b50 (9): Bad file descriptor 00:28:19.242 [2024-02-14 20:28:56.536219] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:19.242 [2024-02-14 20:28:56.536224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:19.242 [2024-02-14 20:28:56.536230] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:19.242 [2024-02-14 20:28:56.536240] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.242 [2024-02-14 20:28:56.545434] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:19.242 [2024-02-14 20:28:56.545886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.546310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.546319] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x880b50 with addr=10.0.0.2, port=4420 00:28:19.242 [2024-02-14 20:28:56.546326] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880b50 is same with the state(5) to be set 00:28:19.242 [2024-02-14 20:28:56.546335] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880b50 (9): Bad file descriptor 00:28:19.242 [2024-02-14 20:28:56.546355] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:19.242 [2024-02-14 20:28:56.546362] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:19.242 [2024-02-14 20:28:56.546368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:19.242 [2024-02-14 20:28:56.546376] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.242 [2024-02-14 20:28:56.555481] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:19.242 [2024-02-14 20:28:56.555940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.556363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.556373] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x880b50 with addr=10.0.0.2, port=4420 00:28:19.242 [2024-02-14 20:28:56.556384] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880b50 is same with the state(5) to be set 00:28:19.242 [2024-02-14 20:28:56.556394] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880b50 (9): Bad file descriptor 00:28:19.242 [2024-02-14 20:28:56.556409] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:19.242 [2024-02-14 20:28:56.556416] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:19.242 [2024-02-14 20:28:56.556422] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:19.242 [2024-02-14 20:28:56.556431] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.242 [2024-02-14 20:28:56.565527] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:19.242 [2024-02-14 20:28:56.565921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.566296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.242 [2024-02-14 20:28:56.566305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x880b50 with addr=10.0.0.2, port=4420 00:28:19.242 [2024-02-14 20:28:56.566312] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880b50 is same with the state(5) to be set 00:28:19.242 [2024-02-14 20:28:56.566322] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880b50 (9): Bad file descriptor 00:28:19.242 [2024-02-14 20:28:56.566330] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:19.242 [2024-02-14 20:28:56.566336] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:19.242 [2024-02-14 20:28:56.566342] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:19.242 [2024-02-14 20:28:56.566351] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
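At this point the discovery log page no longer lists 4420, so the poller prunes that path and keeps only 4421, which is what the get_subsystem_paths check that follows asserts. As its trace at discovery.sh@63 shows, the helper is just a jq projection over the bdev_nvme_get_controllers RPC (rpc_cmd wraps scripts/rpc.py in this harness); a sketch reconstructed from those trace lines:

get_subsystem_paths() {
    # list the trsvcid of every path attached to controller $1 (e.g. nvme0)
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
# once the 4420 listener is removed this prints just: 4421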
00:28:19.242 [2024-02-14 20:28:56.575470] bdev_nvme.c:6491:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:19.243 [2024-02-14 20:28:56.575484] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:20.178 20:28:57 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:20.178 20:28:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:20.178 20:28:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:20.178 20:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.178 20:28:57 -- host/discovery.sh@59 -- # sort 00:28:20.178 20:28:57 -- host/discovery.sh@59 -- # xargs 00:28:20.178 20:28:57 -- common/autotest_common.sh@10 -- # set +x 00:28:20.178 20:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.178 20:28:57 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.178 20:28:57 -- host/discovery.sh@129 -- # get_bdev_list 00:28:20.178 20:28:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.178 20:28:57 -- host/discovery.sh@55 -- # xargs 00:28:20.178 20:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.178 20:28:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:20.178 20:28:57 -- common/autotest_common.sh@10 -- # set +x 00:28:20.178 20:28:57 -- host/discovery.sh@55 -- # sort 00:28:20.178 20:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.178 20:28:57 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:20.178 20:28:57 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:20.435 20:28:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:20.435 20:28:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:20.435 20:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.435 20:28:57 -- common/autotest_common.sh@10 -- # set +x 00:28:20.435 20:28:57 -- host/discovery.sh@63 -- # sort -n 00:28:20.435 20:28:57 -- host/discovery.sh@63 -- # xargs 00:28:20.435 20:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.435 20:28:57 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:20.435 20:28:57 -- host/discovery.sh@131 -- # get_notification_count 00:28:20.435 20:28:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:20.435 20:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.435 20:28:57 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:20.435 20:28:57 -- common/autotest_common.sh@10 -- # set +x 00:28:20.435 20:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.435 20:28:57 -- host/discovery.sh@74 -- # notification_count=0 00:28:20.435 20:28:57 -- host/discovery.sh@75 -- # notify_id=2 00:28:20.435 20:28:57 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:20.435 20:28:57 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:20.435 20:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.435 20:28:57 -- common/autotest_common.sh@10 -- # set +x 00:28:20.435 20:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.435 20:28:57 -- host/discovery.sh@135 -- # sleep 1 00:28:21.370 20:28:58 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:21.370 20:28:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:21.370 20:28:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:21.370 20:28:58 -- host/discovery.sh@59 -- # sort 00:28:21.370 20:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.370 20:28:58 -- host/discovery.sh@59 -- # xargs 00:28:21.370 20:28:58 -- common/autotest_common.sh@10 -- # set +x 00:28:21.370 20:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.370 20:28:58 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:21.370 20:28:58 -- host/discovery.sh@137 -- # get_bdev_list 00:28:21.370 20:28:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.370 20:28:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:21.370 20:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.370 20:28:58 -- host/discovery.sh@55 -- # sort 00:28:21.370 20:28:58 -- common/autotest_common.sh@10 -- # set +x 00:28:21.370 20:28:58 -- host/discovery.sh@55 -- # xargs 00:28:21.370 20:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.628 20:28:58 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:21.628 20:28:58 -- host/discovery.sh@138 -- # get_notification_count 00:28:21.628 20:28:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:21.628 20:28:58 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:21.628 20:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.628 20:28:58 -- common/autotest_common.sh@10 -- # set +x 00:28:21.628 20:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.628 20:28:58 -- host/discovery.sh@74 -- # notification_count=2 00:28:21.628 20:28:58 -- host/discovery.sh@75 -- # notify_id=4 00:28:21.628 20:28:58 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:21.628 20:28:58 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:21.628 20:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.628 20:28:58 -- common/autotest_common.sh@10 -- # set +x 00:28:22.559 [2024-02-14 20:28:59.909497] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:22.559 [2024-02-14 20:28:59.909513] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:22.559 [2024-02-14 20:28:59.909525] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:22.816 [2024-02-14 20:29:00.039936] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:23.074 [2024-02-14 20:29:00.351605] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:23.074 [2024-02-14 20:29:00.351638] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:23.074 20:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.074 20:29:00 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.074 20:29:00 -- common/autotest_common.sh@638 -- # local es=0 00:28:23.074 20:29:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.074 20:29:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:23.074 20:29:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:23.074 20:29:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:23.074 20:29:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:23.074 20:29:00 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.074 20:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.074 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:28:23.074 request: 00:28:23.074 { 00:28:23.074 "name": "nvme", 00:28:23.074 "trtype": "tcp", 00:28:23.074 "traddr": "10.0.0.2", 00:28:23.074 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:23.074 "adrfam": "ipv4", 00:28:23.074 "trsvcid": "8009", 00:28:23.074 "wait_for_attach": true, 00:28:23.074 "method": "bdev_nvme_start_discovery", 00:28:23.074 "req_id": 1 00:28:23.074 } 00:28:23.074 Got JSON-RPC error response 00:28:23.074 response: 00:28:23.074 { 00:28:23.074 "code": -17, 00:28:23.074 "message": "File exists" 00:28:23.074 } 00:28:23.074 20:29:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:23.074 20:29:00 -- common/autotest_common.sh@641 -- # es=1 00:28:23.074 20:29:00 -- 
common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:23.074 20:29:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:23.074 20:29:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:23.074 20:29:00 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:23.074 20:29:00 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:23.074 20:29:00 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:23.074 20:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.074 20:29:00 -- host/discovery.sh@67 -- # sort 00:28:23.074 20:29:00 -- host/discovery.sh@67 -- # xargs 00:28:23.074 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:28:23.074 20:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.074 20:29:00 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:23.074 20:29:00 -- host/discovery.sh@147 -- # get_bdev_list 00:28:23.074 20:29:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.074 20:29:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:23.074 20:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.074 20:29:00 -- host/discovery.sh@55 -- # sort 00:28:23.074 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:28:23.074 20:29:00 -- host/discovery.sh@55 -- # xargs 00:28:23.074 20:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.074 20:29:00 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:23.074 20:29:00 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.074 20:29:00 -- common/autotest_common.sh@638 -- # local es=0 00:28:23.074 20:29:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.074 20:29:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:23.074 20:29:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:23.074 20:29:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:23.074 20:29:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:23.074 20:29:00 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.074 20:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.074 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:28:23.074 request: 00:28:23.074 { 00:28:23.074 "name": "nvme_second", 00:28:23.074 "trtype": "tcp", 00:28:23.074 "traddr": "10.0.0.2", 00:28:23.074 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:23.074 "adrfam": "ipv4", 00:28:23.074 "trsvcid": "8009", 00:28:23.074 "wait_for_attach": true, 00:28:23.074 "method": "bdev_nvme_start_discovery", 00:28:23.074 "req_id": 1 00:28:23.074 } 00:28:23.074 Got JSON-RPC error response 00:28:23.074 response: 00:28:23.074 { 00:28:23.074 "code": -17, 00:28:23.074 "message": "File exists" 00:28:23.074 } 00:28:23.074 20:29:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:23.074 20:29:00 -- common/autotest_common.sh@641 -- # es=1 00:28:23.074 20:29:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:23.074 20:29:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:23.074 20:29:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:23.074 
20:29:00 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:23.074 20:29:00 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:23.074 20:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.074 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:28:23.074 20:29:00 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:23.074 20:29:00 -- host/discovery.sh@67 -- # sort 00:28:23.074 20:29:00 -- host/discovery.sh@67 -- # xargs 00:28:23.074 20:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.332 20:29:00 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:23.332 20:29:00 -- host/discovery.sh@153 -- # get_bdev_list 00:28:23.332 20:29:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.332 20:29:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:23.332 20:29:00 -- host/discovery.sh@55 -- # sort 00:28:23.332 20:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.332 20:29:00 -- host/discovery.sh@55 -- # xargs 00:28:23.332 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:28:23.332 20:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.332 20:29:00 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:23.332 20:29:00 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:23.332 20:29:00 -- common/autotest_common.sh@638 -- # local es=0 00:28:23.332 20:29:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:23.332 20:29:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:23.332 20:29:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:23.332 20:29:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:23.332 20:29:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:23.332 20:29:00 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:23.332 20:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.332 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:28:24.264 [2024-02-14 20:29:01.587233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.264 [2024-02-14 20:29:01.587617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.264 [2024-02-14 20:29:01.587628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x89c350 with addr=10.0.0.2, port=8010 00:28:24.264 [2024-02-14 20:29:01.587642] nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:24.264 [2024-02-14 20:29:01.587652] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:24.264 [2024-02-14 20:29:01.587658] bdev_nvme.c:6766:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:25.197 [2024-02-14 20:29:02.589569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.197 [2024-02-14 20:29:02.589962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.197 [2024-02-14 20:29:02.589973] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x89c350 with addr=10.0.0.2, port=8010 00:28:25.197 [2024-02-14 20:29:02.589983] nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:25.197 [2024-02-14 20:29:02.589989] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:25.197 [2024-02-14 20:29:02.589995] bdev_nvme.c:6766:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:26.569 [2024-02-14 20:29:03.591667] bdev_nvme.c:6747:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:26.569 request: 00:28:26.569 { 00:28:26.569 "name": "nvme_second", 00:28:26.569 "trtype": "tcp", 00:28:26.569 "traddr": "10.0.0.2", 00:28:26.569 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:26.569 "adrfam": "ipv4", 00:28:26.569 "trsvcid": "8010", 00:28:26.569 "attach_timeout_ms": 3000, 00:28:26.569 "method": "bdev_nvme_start_discovery", 00:28:26.569 "req_id": 1 00:28:26.569 } 00:28:26.569 Got JSON-RPC error response 00:28:26.569 response: 00:28:26.569 { 00:28:26.569 "code": -110, 00:28:26.569 "message": "Connection timed out" 00:28:26.569 } 00:28:26.569 20:29:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:26.569 20:29:03 -- common/autotest_common.sh@641 -- # es=1 00:28:26.569 20:29:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:26.569 20:29:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:26.569 20:29:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:26.569 20:29:03 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:26.569 20:29:03 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:26.569 20:29:03 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:26.569 20:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.569 20:29:03 -- host/discovery.sh@67 -- # sort 00:28:26.569 20:29:03 -- common/autotest_common.sh@10 -- # set +x 00:28:26.569 20:29:03 -- host/discovery.sh@67 -- # xargs 00:28:26.569 20:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.569 20:29:03 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:26.569 20:29:03 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:26.569 20:29:03 -- host/discovery.sh@162 -- # kill 1937578 00:28:26.569 20:29:03 -- host/discovery.sh@163 -- # nvmftestfini 00:28:26.569 20:29:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:26.569 20:29:03 -- nvmf/common.sh@116 -- # sync 00:28:26.569 20:29:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:26.569 20:29:03 -- nvmf/common.sh@119 -- # set +e 00:28:26.569 20:29:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:26.569 20:29:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:26.569 rmmod nvme_tcp 00:28:26.569 rmmod nvme_fabrics 00:28:26.569 rmmod nvme_keyring 00:28:26.569 20:29:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:26.569 20:29:03 -- nvmf/common.sh@123 -- # set -e 00:28:26.569 20:29:03 -- nvmf/common.sh@124 -- # return 0 00:28:26.569 20:29:03 -- nvmf/common.sh@477 -- # '[' -n 1937333 ']' 00:28:26.569 20:29:03 -- nvmf/common.sh@478 -- # killprocess 1937333 00:28:26.569 20:29:03 -- common/autotest_common.sh@924 -- # '[' -z 1937333 ']' 00:28:26.569 20:29:03 -- common/autotest_common.sh@928 -- # kill -0 1937333 00:28:26.569 20:29:03 -- common/autotest_common.sh@929 -- # uname 00:28:26.569 20:29:03 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:26.569 20:29:03 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1937333 00:28:26.569 
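The -110 failure above works differently: host/discovery.sh@156 points the second discovery service at port 8010, where nothing is listening, and caps the attach wait at 3000 ms, so the poller retries (the errno 111 connect failures logged above) and then gives up. A minimal reproduction sketch, again assuming rpc_cmd resolves to scripts/rpc.py:

scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000   # no listener on 8010: connect retries, then -110 "Connection timed out"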
20:29:03 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:28:26.569 20:29:03 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:28:26.569 20:29:03 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1937333' 00:28:26.569 killing process with pid 1937333 00:28:26.569 20:29:03 -- common/autotest_common.sh@943 -- # kill 1937333 00:28:26.569 20:29:03 -- common/autotest_common.sh@948 -- # wait 1937333 00:28:26.570 20:29:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:26.570 20:29:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:26.570 20:29:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:26.570 20:29:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.570 20:29:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:26.570 20:29:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.570 20:29:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.570 20:29:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.098 20:29:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:29.098 00:28:29.098 real 0m21.482s 00:28:29.098 user 0m28.430s 00:28:29.098 sys 0m5.841s 00:28:29.098 20:29:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:29.098 20:29:06 -- common/autotest_common.sh@10 -- # set +x 00:28:29.098 ************************************ 00:28:29.098 END TEST nvmf_discovery 00:28:29.098 ************************************ 00:28:29.098 20:29:06 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:29.098 20:29:06 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:28:29.098 20:29:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:29.098 20:29:06 -- common/autotest_common.sh@10 -- # set +x 00:28:29.098 ************************************ 00:28:29.098 START TEST nvmf_discovery_remove_ifc 00:28:29.098 ************************************ 00:28:29.098 20:29:06 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:29.098 * Looking for test storage... 
00:28:29.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:29.098 20:29:06 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.098 20:29:06 -- nvmf/common.sh@7 -- # uname -s 00:28:29.098 20:29:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.098 20:29:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.098 20:29:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.098 20:29:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.098 20:29:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.098 20:29:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.098 20:29:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.098 20:29:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.098 20:29:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.098 20:29:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.099 20:29:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:29.099 20:29:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:29.099 20:29:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.099 20:29:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.099 20:29:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.099 20:29:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.099 20:29:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.099 20:29:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.099 20:29:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.099 20:29:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.099 20:29:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.099 20:29:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.099 20:29:06 -- paths/export.sh@5 -- # export PATH 00:28:29.099 20:29:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.099 20:29:06 -- nvmf/common.sh@46 -- # : 0 00:28:29.099 20:29:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:29.099 20:29:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:29.099 20:29:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:29.099 20:29:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.099 20:29:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.099 20:29:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:29.099 20:29:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:29.099 20:29:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:29.099 20:29:06 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:29.099 20:29:06 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:29.099 20:29:06 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:29.099 20:29:06 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:29.099 20:29:06 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:29.099 20:29:06 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:29.099 20:29:06 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:29.099 20:29:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:29.099 20:29:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.099 20:29:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:29.099 20:29:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:29.099 20:29:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:29.099 20:29:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.099 20:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.099 20:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.099 20:29:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:29.099 20:29:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:29.099 20:29:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:29.099 20:29:06 -- common/autotest_common.sh@10 -- # set +x 00:28:35.700 20:29:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:35.700 20:29:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:35.700 20:29:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:35.700 20:29:12 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:35.700 20:29:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:35.700 20:29:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:35.700 20:29:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:35.700 20:29:12 -- nvmf/common.sh@294 -- # net_devs=() 00:28:35.700 20:29:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:35.700 20:29:12 -- nvmf/common.sh@295 -- # e810=() 00:28:35.700 20:29:12 -- nvmf/common.sh@295 -- # local -ga e810 00:28:35.700 20:29:12 -- nvmf/common.sh@296 -- # x722=() 00:28:35.700 20:29:12 -- nvmf/common.sh@296 -- # local -ga x722 00:28:35.700 20:29:12 -- nvmf/common.sh@297 -- # mlx=() 00:28:35.700 20:29:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:35.700 20:29:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.700 20:29:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:35.700 20:29:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:35.700 20:29:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:35.700 20:29:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:35.700 20:29:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:35.700 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:35.700 20:29:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:35.700 20:29:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:35.700 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:35.700 20:29:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:35.700 20:29:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:35.700 20:29:12 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:35.700 20:29:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.700 20:29:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:35.700 20:29:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.700 20:29:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:35.700 Found net devices under 0000:af:00.0: cvl_0_0 00:28:35.700 20:29:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.700 20:29:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:35.700 20:29:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.700 20:29:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:35.700 20:29:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.700 20:29:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:35.700 Found net devices under 0000:af:00.1: cvl_0_1 00:28:35.700 20:29:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.700 20:29:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:35.700 20:29:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:35.700 20:29:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:35.700 20:29:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:35.700 20:29:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.700 20:29:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.700 20:29:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.700 20:29:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:35.700 20:29:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.700 20:29:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.700 20:29:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:35.700 20:29:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.700 20:29:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.700 20:29:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:35.700 20:29:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:35.700 20:29:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.700 20:29:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.700 20:29:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.700 20:29:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.700 20:29:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:35.700 20:29:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.700 20:29:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.700 20:29:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.700 20:29:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:35.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:35.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:28:35.700 00:28:35.700 --- 10.0.0.2 ping statistics --- 00:28:35.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.700 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:28:35.700 20:29:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:35.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:28:35.701 00:28:35.701 --- 10.0.0.1 ping statistics --- 00:28:35.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.701 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:28:35.701 20:29:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.701 20:29:12 -- nvmf/common.sh@410 -- # return 0 00:28:35.701 20:29:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:35.701 20:29:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.701 20:29:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:35.701 20:29:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:35.701 20:29:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.701 20:29:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:35.701 20:29:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:35.701 20:29:12 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:35.701 20:29:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:35.701 20:29:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:35.701 20:29:12 -- common/autotest_common.sh@10 -- # set +x 00:28:35.701 20:29:12 -- nvmf/common.sh@469 -- # nvmfpid=1943496 00:28:35.701 20:29:12 -- nvmf/common.sh@470 -- # waitforlisten 1943496 00:28:35.701 20:29:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:35.701 20:29:12 -- common/autotest_common.sh@817 -- # '[' -z 1943496 ']' 00:28:35.701 20:29:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.701 20:29:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:35.701 20:29:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.701 20:29:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:35.701 20:29:12 -- common/autotest_common.sh@10 -- # set +x 00:28:35.701 [2024-02-14 20:29:12.487210] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:28:35.701 [2024-02-14 20:29:12.487251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.701 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.701 [2024-02-14 20:29:12.550314] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.701 [2024-02-14 20:29:12.625357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:35.701 [2024-02-14 20:29:12.625463] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
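The nvmf_tcp_init sequence above is what gives each test its two-namespace topology on the pair of E810 ports: the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port (cvl_0_1) stays in the default namespace with 10.0.0.1, and a ping in each direction confirms the wire. Condensed from the trace, with the interface names as discovered on this rig:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator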
00:28:35.701 [2024-02-14 20:29:12.625471] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.701 [2024-02-14 20:29:12.625477] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:35.701 [2024-02-14 20:29:12.625498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.959 20:29:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:35.959 20:29:13 -- common/autotest_common.sh@850 -- # return 0 00:28:35.959 20:29:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:35.959 20:29:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:35.959 20:29:13 -- common/autotest_common.sh@10 -- # set +x 00:28:35.959 20:29:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.959 20:29:13 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:35.959 20:29:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.959 20:29:13 -- common/autotest_common.sh@10 -- # set +x 00:28:35.959 [2024-02-14 20:29:13.323874] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.959 [2024-02-14 20:29:13.332025] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:35.959 null0 00:28:35.959 [2024-02-14 20:29:13.364034] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.217 20:29:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:36.217 20:29:13 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1943633 00:28:36.217 20:29:13 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:36.217 20:29:13 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1943633 /tmp/host.sock 00:28:36.217 20:29:13 -- common/autotest_common.sh@817 -- # '[' -z 1943633 ']' 00:28:36.217 20:29:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:28:36.217 20:29:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:36.217 20:29:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:36.217 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:36.217 20:29:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:36.217 20:29:13 -- common/autotest_common.sh@10 -- # set +x 00:28:36.217 [2024-02-14 20:29:13.428383] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:28:36.217 [2024-02-14 20:29:13.428426] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943633 ] 00:28:36.217 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.217 [2024-02-14 20:29:13.488098] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.218 [2024-02-14 20:29:13.563715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:36.218 [2024-02-14 20:29:13.563828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.152 20:29:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:37.152 20:29:14 -- common/autotest_common.sh@850 -- # return 0 00:28:37.152 20:29:14 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.152 20:29:14 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:37.152 20:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.152 20:29:14 -- common/autotest_common.sh@10 -- # set +x 00:28:37.152 20:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.152 20:29:14 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:37.152 20:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.152 20:29:14 -- common/autotest_common.sh@10 -- # set +x 00:28:37.152 20:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.152 20:29:14 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:37.152 20:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.152 20:29:14 -- common/autotest_common.sh@10 -- # set +x 00:28:38.088 [2024-02-14 20:29:15.353670] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:38.088 [2024-02-14 20:29:15.353689] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:38.088 [2024-02-14 20:29:15.353703] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:38.088 [2024-02-14 20:29:15.482107] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:38.347 [2024-02-14 20:29:15.706772] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:38.347 [2024-02-14 20:29:15.706808] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:38.347 [2024-02-14 20:29:15.706829] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:38.347 [2024-02-14 20:29:15.706841] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:38.347 [2024-02-14 20:29:15.706858] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:38.347 20:29:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.347 20:29:15 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:38.347 20:29:15 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:38.347 20:29:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.347 20:29:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.347 20:29:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.347 20:29:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.347 20:29:15 -- common/autotest_common.sh@10 -- # set +x 00:28:38.347 20:29:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.347 [2024-02-14 20:29:15.712769] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x77b490 was disconnected and freed. delete nvme_qpair. 00:28:38.347 20:29:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.347 20:29:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:38.347 20:29:15 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:38.606 20:29:15 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:38.606 20:29:15 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:38.606 20:29:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.606 20:29:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.606 20:29:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.606 20:29:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.606 20:29:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.606 20:29:15 -- common/autotest_common.sh@10 -- # set +x 00:28:38.606 20:29:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.606 20:29:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.606 20:29:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:38.606 20:29:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:39.541 20:29:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:39.541 20:29:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.541 20:29:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:39.541 20:29:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.541 20:29:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:39.541 20:29:16 -- common/autotest_common.sh@10 -- # set +x 00:28:39.541 20:29:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:39.541 20:29:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.541 20:29:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:39.541 20:29:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:40.916 20:29:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.916 20:29:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.916 20:29:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:40.916 20:29:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:40.916 20:29:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.916 20:29:17 -- common/autotest_common.sh@10 -- # set +x 00:28:40.916 20:29:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.916 20:29:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:40.916 20:29:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:40.916 20:29:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:41.850 20:29:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:41.850 20:29:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:28:41.850 20:29:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:41.850 20:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.850 20:29:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:41.850 20:29:19 -- common/autotest_common.sh@10 -- # set +x 00:28:41.850 20:29:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:41.850 20:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:41.850 20:29:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:41.850 20:29:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:42.783 20:29:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:42.783 20:29:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.783 20:29:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:42.783 20:29:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:42.783 20:29:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:42.783 20:29:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:42.783 20:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:42.783 20:29:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:42.783 20:29:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:42.783 20:29:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:43.717 20:29:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.717 20:29:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.717 20:29:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.717 20:29:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.717 20:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:43.717 20:29:21 -- common/autotest_common.sh@10 -- # set +x 00:28:43.717 20:29:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.976 20:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:43.976 [2024-02-14 20:29:21.148099] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:43.976 [2024-02-14 20:29:21.148141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.976 [2024-02-14 20:29:21.148155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.976 [2024-02-14 20:29:21.148164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.976 [2024-02-14 20:29:21.148171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.976 [2024-02-14 20:29:21.148178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.976 [2024-02-14 20:29:21.148185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.976 [2024-02-14 20:29:21.148191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.976 [2024-02-14 20:29:21.148198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.976 [2024-02-14 20:29:21.148206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.976 [2024-02-14 20:29:21.148212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.976 [2024-02-14 20:29:21.148218] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x741870 is same with the state(5) to be set 00:28:43.976 [2024-02-14 20:29:21.158120] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x741870 (9): Bad file descriptor 00:28:43.976 20:29:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:43.976 20:29:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:43.976 [2024-02-14 20:29:21.168161] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:44.911 20:29:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.912 20:29:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.912 20:29:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.912 20:29:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:44.912 20:29:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.912 20:29:22 -- common/autotest_common.sh@10 -- # set +x 00:28:44.912 20:29:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.912 [2024-02-14 20:29:22.208674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:45.847 [2024-02-14 20:29:23.232728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:45.847 [2024-02-14 20:29:23.232778] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x741870 with addr=10.0.0.2, port=4420 00:28:45.847 [2024-02-14 20:29:23.232805] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x741870 is same with the state(5) to be set 00:28:45.847 [2024-02-14 20:29:23.233237] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x741870 (9): Bad file descriptor 00:28:45.847 [2024-02-14 20:29:23.233267] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
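The "Resetting controller failed" above is the intended behavior of this test, not a bug: host/discovery_remove_ifc.sh@75-76 had already deleted the target address and downed the interface, so every reconnect to 10.0.0.2:4420 dies with errno 110 until the script restores the interface (as it does shortly below). Condensed from the script trace:

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0  # @75: pull the target address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down             # @76: down the interface
# bdev_get_bdevs is polled until nvme0n1 disappears (ctrlr-loss-timeout-sec 2 expires)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # @82: restore the address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up               # @83: bring it back up
# the still-running discovery service reattaches and the poll waits for the new bdev, nvme1n1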
00:28:45.847 [2024-02-14 20:29:23.233292] bdev_nvme.c:6455:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:45.847 [2024-02-14 20:29:23.233321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.847 [2024-02-14 20:29:23.233336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.847 [2024-02-14 20:29:23.233350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.847 [2024-02-14 20:29:23.233360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.847 [2024-02-14 20:29:23.233376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.847 [2024-02-14 20:29:23.233386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.847 [2024-02-14 20:29:23.233396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.847 [2024-02-14 20:29:23.233407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.847 [2024-02-14 20:29:23.233418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.847 [2024-02-14 20:29:23.233428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.847 [2024-02-14 20:29:23.233438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:28:45.847 [2024-02-14 20:29:23.233833] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x741c80 (9): Bad file descriptor 00:28:45.847 [2024-02-14 20:29:23.234849] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:45.847 [2024-02-14 20:29:23.234864] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:45.847 20:29:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:45.847 20:29:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:45.847 20:29:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:47.219 20:29:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:47.219 20:29:24 -- common/autotest_common.sh@10 -- # set +x 00:28:47.219 20:29:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:47.219 20:29:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:47.219 20:29:24 -- common/autotest_common.sh@10 -- # set +x 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:47.219 20:29:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:47.219 20:29:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:48.153 [2024-02-14 20:29:25.248139] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:48.153 [2024-02-14 20:29:25.248156] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:48.153 [2024-02-14 20:29:25.248170] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:48.153 [2024-02-14 20:29:25.334423] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:48.153 [2024-02-14 20:29:25.438065] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:48.153 [2024-02-14 20:29:25.438099] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:48.153 [2024-02-14 20:29:25.438117] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:48.153 [2024-02-14 20:29:25.438134] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 
done 00:28:48.153 [2024-02-14 20:29:25.438142] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:48.153 20:29:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:48.153 20:29:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:48.153 20:29:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.153 20:29:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:48.153 20:29:25 -- common/autotest_common.sh@10 -- # set +x 00:28:48.153 20:29:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:48.153 20:29:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:48.153 [2024-02-14 20:29:25.447731] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x74e690 was disconnected and freed. delete nvme_qpair. 00:28:48.153 20:29:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.153 20:29:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:48.153 20:29:25 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:48.153 20:29:25 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1943633 00:28:48.153 20:29:25 -- common/autotest_common.sh@924 -- # '[' -z 1943633 ']' 00:28:48.153 20:29:25 -- common/autotest_common.sh@928 -- # kill -0 1943633 00:28:48.153 20:29:25 -- common/autotest_common.sh@929 -- # uname 00:28:48.153 20:29:25 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:48.153 20:29:25 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1943633 00:28:48.153 20:29:25 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:28:48.153 20:29:25 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:28:48.153 20:29:25 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1943633' 00:28:48.153 killing process with pid 1943633 00:28:48.153 20:29:25 -- common/autotest_common.sh@943 -- # kill 1943633 00:28:48.153 20:29:25 -- common/autotest_common.sh@948 -- # wait 1943633 00:28:48.412 20:29:25 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:48.412 20:29:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:48.412 20:29:25 -- nvmf/common.sh@116 -- # sync 00:28:48.412 20:29:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:48.412 20:29:25 -- nvmf/common.sh@119 -- # set +e 00:28:48.412 20:29:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:48.412 20:29:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:48.412 rmmod nvme_tcp 00:28:48.412 rmmod nvme_fabrics 00:28:48.412 rmmod nvme_keyring 00:28:48.412 20:29:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:48.412 20:29:25 -- nvmf/common.sh@123 -- # set -e 00:28:48.412 20:29:25 -- nvmf/common.sh@124 -- # return 0 00:28:48.412 20:29:25 -- nvmf/common.sh@477 -- # '[' -n 1943496 ']' 00:28:48.412 20:29:25 -- nvmf/common.sh@478 -- # killprocess 1943496 00:28:48.412 20:29:25 -- common/autotest_common.sh@924 -- # '[' -z 1943496 ']' 00:28:48.412 20:29:25 -- common/autotest_common.sh@928 -- # kill -0 1943496 00:28:48.412 20:29:25 -- common/autotest_common.sh@929 -- # uname 00:28:48.412 20:29:25 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:48.412 20:29:25 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1943496 00:28:48.671 20:29:25 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:28:48.671 20:29:25 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:28:48.671 20:29:25 -- 
common/autotest_common.sh@942 -- # echo 'killing process with pid 1943496' 00:28:48.671 killing process with pid 1943496 00:28:48.671 20:29:25 -- common/autotest_common.sh@943 -- # kill 1943496 00:28:48.671 20:29:25 -- common/autotest_common.sh@948 -- # wait 1943496 00:28:48.671 20:29:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:48.671 20:29:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:48.671 20:29:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:48.671 20:29:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:48.671 20:29:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:48.671 20:29:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.671 20:29:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.671 20:29:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.243 20:29:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:51.243 00:28:51.243 real 0m22.027s 00:28:51.243 user 0m26.011s 00:28:51.243 sys 0m5.957s 00:28:51.243 20:29:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:51.243 20:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:51.243 ************************************ 00:28:51.243 END TEST nvmf_discovery_remove_ifc 00:28:51.243 ************************************ 00:28:51.243 20:29:28 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:28:51.243 20:29:28 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:51.243 20:29:28 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:28:51.243 20:29:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:51.243 20:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:51.243 ************************************ 00:28:51.243 START TEST nvmf_digest 00:28:51.243 ************************************ 00:28:51.243 20:29:28 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:51.243 * Looking for test storage... 
00:28:51.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.243 20:29:28 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.243 20:29:28 -- nvmf/common.sh@7 -- # uname -s 00:28:51.243 20:29:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.243 20:29:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.243 20:29:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.243 20:29:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.243 20:29:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.243 20:29:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.243 20:29:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.243 20:29:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.243 20:29:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.243 20:29:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.243 20:29:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:51.244 20:29:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:51.244 20:29:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.244 20:29:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.244 20:29:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.244 20:29:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.244 20:29:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.244 20:29:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.244 20:29:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.244 20:29:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.244 20:29:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.244 20:29:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.244 20:29:28 -- paths/export.sh@5 -- # export PATH 00:28:51.244 20:29:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.244 20:29:28 -- nvmf/common.sh@46 -- # : 0 00:28:51.244 20:29:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:51.244 20:29:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:51.244 20:29:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:51.244 20:29:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.244 20:29:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.244 20:29:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:51.244 20:29:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:51.244 20:29:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:51.244 20:29:28 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:51.244 20:29:28 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:51.244 20:29:28 -- host/digest.sh@16 -- # runtime=2 00:28:51.244 20:29:28 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:28:51.244 20:29:28 -- host/digest.sh@132 -- # nvmftestinit 00:28:51.244 20:29:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:51.244 20:29:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.244 20:29:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:51.244 20:29:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:51.244 20:29:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:51.244 20:29:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.244 20:29:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.244 20:29:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.244 20:29:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:51.244 20:29:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:51.244 20:29:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:51.244 20:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:56.518 20:29:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:56.518 20:29:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:56.518 20:29:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:56.518 20:29:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:56.518 20:29:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:56.518 20:29:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:56.518 20:29:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:56.518 20:29:33 -- 
nvmf/common.sh@294 -- # net_devs=() 00:28:56.518 20:29:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:56.518 20:29:33 -- nvmf/common.sh@295 -- # e810=() 00:28:56.518 20:29:33 -- nvmf/common.sh@295 -- # local -ga e810 00:28:56.518 20:29:33 -- nvmf/common.sh@296 -- # x722=() 00:28:56.518 20:29:33 -- nvmf/common.sh@296 -- # local -ga x722 00:28:56.518 20:29:33 -- nvmf/common.sh@297 -- # mlx=() 00:28:56.518 20:29:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:56.518 20:29:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.518 20:29:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:56.518 20:29:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:56.518 20:29:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:56.518 20:29:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:56.518 20:29:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:56.518 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:56.518 20:29:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:56.518 20:29:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:56.518 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:56.518 20:29:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:56.518 20:29:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:56.518 20:29:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.518 20:29:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:56.518 20:29:33 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.518 20:29:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:56.518 Found net devices under 0000:af:00.0: cvl_0_0 00:28:56.518 20:29:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.518 20:29:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:56.518 20:29:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.518 20:29:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:56.518 20:29:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.518 20:29:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:56.518 Found net devices under 0000:af:00.1: cvl_0_1 00:28:56.518 20:29:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.518 20:29:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:56.518 20:29:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:56.518 20:29:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:56.518 20:29:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:56.518 20:29:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.518 20:29:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.518 20:29:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.518 20:29:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:56.518 20:29:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.518 20:29:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.518 20:29:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:56.518 20:29:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.518 20:29:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.518 20:29:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:56.518 20:29:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:56.518 20:29:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.518 20:29:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.777 20:29:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.778 20:29:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.778 20:29:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:56.778 20:29:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.778 20:29:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.778 20:29:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.778 20:29:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:56.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:28:56.778 00:28:56.778 --- 10.0.0.2 ping statistics --- 00:28:56.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.778 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:28:56.778 20:29:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:28:56.778 00:28:56.778 --- 10.0.0.1 ping statistics --- 00:28:56.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.778 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:28:56.778 20:29:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.778 20:29:34 -- nvmf/common.sh@410 -- # return 0 00:28:56.778 20:29:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:56.778 20:29:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.778 20:29:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:56.778 20:29:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:56.778 20:29:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.778 20:29:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:56.778 20:29:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:56.778 20:29:34 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:56.778 20:29:34 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:28:56.778 20:29:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:56.778 20:29:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:56.778 20:29:34 -- common/autotest_common.sh@10 -- # set +x 00:28:56.778 ************************************ 00:28:56.778 START TEST nvmf_digest_clean 00:28:56.778 ************************************ 00:28:56.778 20:29:34 -- common/autotest_common.sh@1102 -- # run_digest 00:28:56.778 20:29:34 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:28:56.778 20:29:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:56.778 20:29:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:56.778 20:29:34 -- common/autotest_common.sh@10 -- # set +x 00:28:56.778 20:29:34 -- nvmf/common.sh@469 -- # nvmfpid=1949573 00:28:56.778 20:29:34 -- nvmf/common.sh@470 -- # waitforlisten 1949573 00:28:56.778 20:29:34 -- common/autotest_common.sh@817 -- # '[' -z 1949573 ']' 00:28:56.778 20:29:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.778 20:29:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:56.778 20:29:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.778 20:29:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:56.778 20:29:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:56.778 20:29:34 -- common/autotest_common.sh@10 -- # set +x 00:28:56.778 [2024-02-14 20:29:34.165790] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:28:56.778 [2024-02-14 20:29:34.165833] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.778 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.037 [2024-02-14 20:29:34.227685] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.037 [2024-02-14 20:29:34.302589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:57.037 [2024-02-14 20:29:34.302696] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.037 [2024-02-14 20:29:34.302704] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.037 [2024-02-14 20:29:34.302710] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.037 [2024-02-14 20:29:34.302725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.604 20:29:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:57.604 20:29:34 -- common/autotest_common.sh@850 -- # return 0 00:28:57.604 20:29:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:57.604 20:29:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:57.604 20:29:34 -- common/autotest_common.sh@10 -- # set +x 00:28:57.604 20:29:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.604 20:29:34 -- host/digest.sh@120 -- # common_target_config 00:28:57.604 20:29:34 -- host/digest.sh@43 -- # rpc_cmd 00:28:57.604 20:29:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.604 20:29:34 -- common/autotest_common.sh@10 -- # set +x 00:28:57.863 null0 00:28:57.863 [2024-02-14 20:29:35.060670] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.863 [2024-02-14 20:29:35.084827] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.863 20:29:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.863 20:29:35 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:28:57.863 20:29:35 -- host/digest.sh@77 -- # local rw bs qd 00:28:57.863 20:29:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:57.863 20:29:35 -- host/digest.sh@80 -- # rw=randread 00:28:57.863 20:29:35 -- host/digest.sh@80 -- # bs=4096 00:28:57.863 20:29:35 -- host/digest.sh@80 -- # qd=128 00:28:57.863 20:29:35 -- host/digest.sh@82 -- # bperfpid=1949662 00:28:57.863 20:29:35 -- host/digest.sh@83 -- # waitforlisten 1949662 /var/tmp/bperf.sock 00:28:57.863 20:29:35 -- common/autotest_common.sh@817 -- # '[' -z 1949662 ']' 00:28:57.863 20:29:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:57.863 20:29:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:57.863 20:29:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:57.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:57.863 20:29:35 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:57.863 20:29:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:57.863 20:29:35 -- common/autotest_common.sh@10 -- # set +x 00:28:57.863 [2024-02-14 20:29:35.129412] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:28:57.863 [2024-02-14 20:29:35.129455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949662 ] 00:28:57.863 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.863 [2024-02-14 20:29:35.187960] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.863 [2024-02-14 20:29:35.263510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.800 20:29:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:58.800 20:29:35 -- common/autotest_common.sh@850 -- # return 0 00:28:58.800 20:29:35 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:58.800 20:29:35 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:58.800 20:29:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:58.800 20:29:36 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.800 20:29:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.367 nvme0n1 00:28:59.367 20:29:36 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:59.367 20:29:36 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.367 Running I/O for 2 seconds... 
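The trace above is the first digest_clean case (randread, 4 KiB, QD 128): bdevperf is started paused with --wait-for-rpc on its own RPC socket, framework init is completed over that socket, an NVMe-oF TCP controller is attached with data digest enabled (--ddgst), and bdevperf.py drives the timed run. A minimal sketch of that sequence, with SPDK_ROOT standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk tree used here:

  SPDK_ROOT=/path/to/spdk          # stands in for the workspace spdk tree above
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf paused so the transport can be configured first.
  "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # Finish subsystem init, then attach the target with data digest on.
  "$SPDK_ROOT/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
  "$SPDK_ROOT/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Kick off the 2-second run; this prints the Latency table below.
  "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests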
00:29:01.270 00:29:01.270 Latency(us) 00:29:01.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.270 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:01.270 nvme0n1 : 2.00 28964.57 113.14 0.00 0.00 4414.31 2293.76 18599.74 00:29:01.270 =================================================================================================================== 00:29:01.270 Total : 28964.57 113.14 0.00 0.00 4414.31 2293.76 18599.74 00:29:01.270 0 00:29:01.270 20:29:38 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:01.270 20:29:38 -- host/digest.sh@92 -- # get_accel_stats 00:29:01.270 20:29:38 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:01.270 20:29:38 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:01.270 | select(.opcode=="crc32c") 00:29:01.270 | "\(.module_name) \(.executed)"' 00:29:01.270 20:29:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:01.530 20:29:38 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:01.530 20:29:38 -- host/digest.sh@93 -- # exp_module=software 00:29:01.530 20:29:38 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:01.530 20:29:38 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:01.530 20:29:38 -- host/digest.sh@97 -- # killprocess 1949662 00:29:01.530 20:29:38 -- common/autotest_common.sh@924 -- # '[' -z 1949662 ']' 00:29:01.530 20:29:38 -- common/autotest_common.sh@928 -- # kill -0 1949662 00:29:01.531 20:29:38 -- common/autotest_common.sh@929 -- # uname 00:29:01.531 20:29:38 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:01.531 20:29:38 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1949662 00:29:01.531 20:29:38 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:01.531 20:29:38 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:01.531 20:29:38 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1949662' 00:29:01.531 killing process with pid 1949662 00:29:01.531 20:29:38 -- common/autotest_common.sh@943 -- # kill 1949662 00:29:01.531 Received shutdown signal, test time was about 2.000000 seconds 00:29:01.531 00:29:01.531 Latency(us) 00:29:01.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.531 =================================================================================================================== 00:29:01.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.531 20:29:38 -- common/autotest_common.sh@948 -- # wait 1949662 00:29:01.790 20:29:39 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:29:01.790 20:29:39 -- host/digest.sh@77 -- # local rw bs qd 00:29:01.790 20:29:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:01.790 20:29:39 -- host/digest.sh@80 -- # rw=randread 00:29:01.790 20:29:39 -- host/digest.sh@80 -- # bs=131072 00:29:01.790 20:29:39 -- host/digest.sh@80 -- # qd=16 00:29:01.790 20:29:39 -- host/digest.sh@82 -- # bperfpid=1950300 00:29:01.790 20:29:39 -- host/digest.sh@83 -- # waitforlisten 1950300 /var/tmp/bperf.sock 00:29:01.790 20:29:39 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:01.790 20:29:39 -- common/autotest_common.sh@817 -- # '[' -z 1950300 ']' 00:29:01.790 20:29:39 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:01.790 20:29:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:01.790 20:29:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:01.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:01.790 20:29:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:01.790 20:29:39 -- common/autotest_common.sh@10 -- # set +x 00:29:01.790 [2024-02-14 20:29:39.086770] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:29:01.790 [2024-02-14 20:29:39.086816] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950300 ] 00:29:01.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:01.790 Zero copy mechanism will not be used. 00:29:01.790 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.790 [2024-02-14 20:29:39.148100] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.049 [2024-02-14 20:29:39.221624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.618 20:29:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:02.618 20:29:39 -- common/autotest_common.sh@850 -- # return 0 00:29:02.618 20:29:39 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:02.618 20:29:39 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:02.618 20:29:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:02.878 20:29:40 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.878 20:29:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.138 nvme0n1 00:29:03.138 20:29:40 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:03.138 20:29:40 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.138 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:03.138 Zero copy mechanism will not be used. 00:29:03.138 Running I/O for 2 seconds... 
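Two notes on this second job. First, the "I/O size of 131072 is greater than zero copy threshold (65536)" notice just means the 128 KiB jobs fall back to regular copies on the TCP socket; it is informational, not a failure. Second, after each timed run digest.sh decides pass/fail from bperf's accel statistics: the crc32c opcode must actually have executed, and in this configuration it must have run in the software module, since no offload was assigned. A sketch of that check, reusing the exact jq filter from the trace (SPDK_ROOT and BPERF_SOCK as in the sketch above):

  stats=$("$SPDK_ROOT/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  read -r acc_module acc_executed <<< "$stats"

  (( acc_executed > 0 ))        || exit 1   # digests must actually have run
  [[ $acc_module == software ]] || exit 1   # expected module for this config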
00:29:05.675 00:29:05.675 Latency(us) 00:29:05.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.675 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:05.675 nvme0n1 : 2.00 2663.20 332.90 0.00 0.00 6005.40 5024.43 20597.03 00:29:05.675 =================================================================================================================== 00:29:05.675 Total : 2663.20 332.90 0.00 0.00 6005.40 5024.43 20597.03 00:29:05.675 0 00:29:05.675 20:29:42 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:05.675 20:29:42 -- host/digest.sh@92 -- # get_accel_stats 00:29:05.675 20:29:42 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:05.675 20:29:42 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:05.675 | select(.opcode=="crc32c") 00:29:05.675 | "\(.module_name) \(.executed)"' 00:29:05.675 20:29:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:05.675 20:29:42 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:05.675 20:29:42 -- host/digest.sh@93 -- # exp_module=software 00:29:05.675 20:29:42 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:05.675 20:29:42 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:05.675 20:29:42 -- host/digest.sh@97 -- # killprocess 1950300 00:29:05.675 20:29:42 -- common/autotest_common.sh@924 -- # '[' -z 1950300 ']' 00:29:05.675 20:29:42 -- common/autotest_common.sh@928 -- # kill -0 1950300 00:29:05.675 20:29:42 -- common/autotest_common.sh@929 -- # uname 00:29:05.675 20:29:42 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:05.675 20:29:42 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1950300 00:29:05.675 20:29:42 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:05.675 20:29:42 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:05.675 20:29:42 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1950300' 00:29:05.675 killing process with pid 1950300 00:29:05.675 20:29:42 -- common/autotest_common.sh@943 -- # kill 1950300 00:29:05.675 Received shutdown signal, test time was about 2.000000 seconds 00:29:05.675 00:29:05.675 Latency(us) 00:29:05.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.675 =================================================================================================================== 00:29:05.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.675 20:29:42 -- common/autotest_common.sh@948 -- # wait 1950300 00:29:05.675 20:29:42 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:05.675 20:29:42 -- host/digest.sh@77 -- # local rw bs qd 00:29:05.675 20:29:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:05.675 20:29:42 -- host/digest.sh@80 -- # rw=randwrite 00:29:05.675 20:29:42 -- host/digest.sh@80 -- # bs=4096 00:29:05.675 20:29:42 -- host/digest.sh@80 -- # qd=128 00:29:05.675 20:29:42 -- host/digest.sh@82 -- # bperfpid=1950996 00:29:05.675 20:29:42 -- host/digest.sh@83 -- # waitforlisten 1950996 /var/tmp/bperf.sock 00:29:05.675 20:29:42 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:05.675 20:29:42 -- common/autotest_common.sh@817 -- # '[' -z 1950996 ']' 00:29:05.675 20:29:42 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:05.675 20:29:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:05.675 20:29:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.675 20:29:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:05.675 20:29:42 -- common/autotest_common.sh@10 -- # set +x 00:29:05.675 [2024-02-14 20:29:43.005580] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:29:05.675 [2024-02-14 20:29:43.005626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950996 ] 00:29:05.675 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.675 [2024-02-14 20:29:43.063257] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.935 [2024-02-14 20:29:43.129828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.503 20:29:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:06.503 20:29:43 -- common/autotest_common.sh@850 -- # return 0 00:29:06.503 20:29:43 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:06.503 20:29:43 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:06.503 20:29:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:06.763 20:29:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.763 20:29:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.022 nvme0n1 00:29:07.022 20:29:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:07.022 20:29:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:07.022 Running I/O for 2 seconds... 
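The IOPS and MiB/s columns in each Latency table are mutually consistent: throughput is simply IOPS times the I/O size. A quick check against the two completed read jobs above, both of which round to the reported values:

  awk 'BEGIN { printf "%.2f\n", 28964.57 * 4096   / 1048576 }'   # -> 113.14 MiB/s
  awk 'BEGIN { printf "%.2f\n",  2663.20 * 131072 / 1048576 }'   # -> 332.90 MiB/s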
00:29:09.563 00:29:09.563 Latency(us) 00:29:09.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.563 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:09.563 nvme0n1 : 2.00 28308.97 110.58 0.00 0.00 4514.21 2153.33 19473.55 00:29:09.563 =================================================================================================================== 00:29:09.563 Total : 28308.97 110.58 0.00 0.00 4514.21 2153.33 19473.55 00:29:09.563 0 00:29:09.563 20:29:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:09.563 20:29:46 -- host/digest.sh@92 -- # get_accel_stats 00:29:09.563 20:29:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:09.563 20:29:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:09.563 | select(.opcode=="crc32c") 00:29:09.563 | "\(.module_name) \(.executed)"' 00:29:09.563 20:29:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:09.563 20:29:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:09.563 20:29:46 -- host/digest.sh@93 -- # exp_module=software 00:29:09.563 20:29:46 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:09.563 20:29:46 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:09.563 20:29:46 -- host/digest.sh@97 -- # killprocess 1950996 00:29:09.563 20:29:46 -- common/autotest_common.sh@924 -- # '[' -z 1950996 ']' 00:29:09.563 20:29:46 -- common/autotest_common.sh@928 -- # kill -0 1950996 00:29:09.563 20:29:46 -- common/autotest_common.sh@929 -- # uname 00:29:09.563 20:29:46 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:09.563 20:29:46 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1950996 00:29:09.563 20:29:46 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:09.563 20:29:46 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:09.563 20:29:46 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1950996' 00:29:09.563 killing process with pid 1950996 00:29:09.563 20:29:46 -- common/autotest_common.sh@943 -- # kill 1950996 00:29:09.563 Received shutdown signal, test time was about 2.000000 seconds 00:29:09.563 00:29:09.563 Latency(us) 00:29:09.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.563 =================================================================================================================== 00:29:09.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.563 20:29:46 -- common/autotest_common.sh@948 -- # wait 1950996 00:29:09.563 20:29:46 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:09.563 20:29:46 -- host/digest.sh@77 -- # local rw bs qd 00:29:09.563 20:29:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:09.563 20:29:46 -- host/digest.sh@80 -- # rw=randwrite 00:29:09.563 20:29:46 -- host/digest.sh@80 -- # bs=131072 00:29:09.563 20:29:46 -- host/digest.sh@80 -- # qd=16 00:29:09.563 20:29:46 -- host/digest.sh@82 -- # bperfpid=1951692 00:29:09.563 20:29:46 -- host/digest.sh@83 -- # waitforlisten 1951692 /var/tmp/bperf.sock 00:29:09.563 20:29:46 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:09.563 20:29:46 -- common/autotest_common.sh@817 -- # '[' -z 1951692 ']' 00:29:09.563 20:29:46 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:09.563 20:29:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:09.563 20:29:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.563 20:29:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:09.563 20:29:46 -- common/autotest_common.sh@10 -- # set +x 00:29:09.563 [2024-02-14 20:29:46.830106] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:29:09.563 [2024-02-14 20:29:46.830153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1951692 ] 00:29:09.563 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.563 Zero copy mechanism will not be used. 00:29:09.563 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.563 [2024-02-14 20:29:46.888545] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.563 [2024-02-14 20:29:46.963101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.572 20:29:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:10.572 20:29:47 -- common/autotest_common.sh@850 -- # return 0 00:29:10.572 20:29:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:10.572 20:29:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:10.572 20:29:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:10.572 20:29:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.572 20:29:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.831 nvme0n1 00:29:10.831 20:29:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:10.831 20:29:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.831 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:10.831 Zero copy mechanism will not be used. 00:29:10.831 Running I/O for 2 seconds... 
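This fourth job completes the digest_clean matrix: two workloads crossed with two I/O-size/queue-depth pairs, as invoked one by one at host/digest.sh@122-125. Inside digest.sh these are four literal calls; written as a loop the same matrix would read:

  # rw, io size (bytes), queue depth -- word splitting is intentional
  for args in 'randread 4096 128'  'randread 131072 16' \
              'randwrite 4096 128' 'randwrite 131072 16'; do
      run_bperf $args
  done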
00:29:13.371 00:29:13.371 Latency(us) 00:29:13.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.371 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:13.371 nvme0n1 : 2.01 1885.89 235.74 0.00 0.00 8466.58 6116.69 32455.92 00:29:13.371 =================================================================================================================== 00:29:13.371 Total : 1885.89 235.74 0.00 0.00 8466.58 6116.69 32455.92 00:29:13.371 0 00:29:13.371 20:29:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:13.371 20:29:50 -- host/digest.sh@92 -- # get_accel_stats 00:29:13.371 20:29:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:13.371 20:29:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:13.371 | select(.opcode=="crc32c") 00:29:13.371 | "\(.module_name) \(.executed)"' 00:29:13.371 20:29:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:13.371 20:29:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:13.372 20:29:50 -- host/digest.sh@93 -- # exp_module=software 00:29:13.372 20:29:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:13.372 20:29:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:13.372 20:29:50 -- host/digest.sh@97 -- # killprocess 1951692 00:29:13.372 20:29:50 -- common/autotest_common.sh@924 -- # '[' -z 1951692 ']' 00:29:13.372 20:29:50 -- common/autotest_common.sh@928 -- # kill -0 1951692 00:29:13.372 20:29:50 -- common/autotest_common.sh@929 -- # uname 00:29:13.372 20:29:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:13.372 20:29:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1951692 00:29:13.372 20:29:50 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:13.372 20:29:50 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:13.372 20:29:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1951692' 00:29:13.372 killing process with pid 1951692 00:29:13.372 20:29:50 -- common/autotest_common.sh@943 -- # kill 1951692 00:29:13.372 Received shutdown signal, test time was about 2.000000 seconds 00:29:13.372 00:29:13.372 Latency(us) 00:29:13.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.372 =================================================================================================================== 00:29:13.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.372 20:29:50 -- common/autotest_common.sh@948 -- # wait 1951692 00:29:13.372 20:29:50 -- host/digest.sh@126 -- # killprocess 1949573 00:29:13.372 20:29:50 -- common/autotest_common.sh@924 -- # '[' -z 1949573 ']' 00:29:13.372 20:29:50 -- common/autotest_common.sh@928 -- # kill -0 1949573 00:29:13.372 20:29:50 -- common/autotest_common.sh@929 -- # uname 00:29:13.372 20:29:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:13.372 20:29:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1949573 00:29:13.372 20:29:50 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:13.372 20:29:50 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:13.372 20:29:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1949573' 00:29:13.372 killing process with pid 1949573 00:29:13.372 20:29:50 -- common/autotest_common.sh@943 -- # kill 1949573 00:29:13.372 20:29:50 -- common/autotest_common.sh@948 -- # wait 1949573 
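Teardown above goes through the killprocess helper once per bperf instance and finally for the nvmf target itself (pid 1949573). From the steps the trace exercises (the pid check, kill -0, ps comm lookup, the sudo comparison, then kill and wait), the helper roughly does the following; this is a condensed sketch, and the real autotest_common.sh version also handles sudo-wrapped processes rather than skipping them:

  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]]  || return 1
      kill -0 "$pid" || return 1                     # still running?
      process_name=$(ps --no-headers -o comm= "$pid")
      [[ $process_name != sudo ]] || return 1        # sketch: skip the sudo branch
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }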
00:29:13.632 00:29:13.632 real 0m16.774s 00:29:13.632 user 0m32.790s 00:29:13.632 sys 0m3.664s 00:29:13.632 20:29:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:13.632 20:29:50 -- common/autotest_common.sh@10 -- # set +x 00:29:13.632 ************************************ 00:29:13.632 END TEST nvmf_digest_clean 00:29:13.632 ************************************ 00:29:13.632 20:29:50 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:13.632 20:29:50 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:29:13.632 20:29:50 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:13.632 20:29:50 -- common/autotest_common.sh@10 -- # set +x 00:29:13.632 ************************************ 00:29:13.632 START TEST nvmf_digest_error 00:29:13.632 ************************************ 00:29:13.632 20:29:50 -- common/autotest_common.sh@1102 -- # run_digest_error 00:29:13.632 20:29:50 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:13.632 20:29:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:13.632 20:29:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:13.632 20:29:50 -- common/autotest_common.sh@10 -- # set +x 00:29:13.632 20:29:50 -- nvmf/common.sh@469 -- # nvmfpid=1952417 00:29:13.632 20:29:50 -- nvmf/common.sh@470 -- # waitforlisten 1952417 00:29:13.632 20:29:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:13.632 20:29:50 -- common/autotest_common.sh@817 -- # '[' -z 1952417 ']' 00:29:13.632 20:29:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.632 20:29:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:13.632 20:29:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.632 20:29:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:13.632 20:29:50 -- common/autotest_common.sh@10 -- # set +x 00:29:13.632 [2024-02-14 20:29:50.977966] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:29:13.632 [2024-02-14 20:29:50.978015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.632 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.632 [2024-02-14 20:29:51.035852] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.892 [2024-02-14 20:29:51.111513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:13.892 [2024-02-14 20:29:51.111618] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.892 [2024-02-14 20:29:51.111625] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.892 [2024-02-14 20:29:51.111631] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:13.892 [2024-02-14 20:29:51.111651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.460 20:29:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:14.460 20:29:51 -- common/autotest_common.sh@850 -- # return 0 00:29:14.460 20:29:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:14.460 20:29:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:14.460 20:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:14.460 20:29:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.460 20:29:51 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:14.460 20:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.460 20:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:14.460 [2024-02-14 20:29:51.805657] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:14.460 20:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.460 20:29:51 -- host/digest.sh@104 -- # common_target_config 00:29:14.460 20:29:51 -- host/digest.sh@43 -- # rpc_cmd 00:29:14.460 20:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.460 20:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:14.720 null0 00:29:14.720 [2024-02-14 20:29:51.898345] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.720 [2024-02-14 20:29:51.922510] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.720 20:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.720 20:29:51 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:14.720 20:29:51 -- host/digest.sh@54 -- # local rw bs qd 00:29:14.720 20:29:51 -- host/digest.sh@56 -- # rw=randread 00:29:14.720 20:29:51 -- host/digest.sh@56 -- # bs=4096 00:29:14.720 20:29:51 -- host/digest.sh@56 -- # qd=128 00:29:14.720 20:29:51 -- host/digest.sh@58 -- # bperfpid=1952454 00:29:14.720 20:29:51 -- host/digest.sh@60 -- # waitforlisten 1952454 /var/tmp/bperf.sock 00:29:14.720 20:29:51 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:14.720 20:29:51 -- common/autotest_common.sh@817 -- # '[' -z 1952454 ']' 00:29:14.720 20:29:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.720 20:29:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:14.720 20:29:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.720 20:29:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:14.720 20:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:14.720 [2024-02-14 20:29:51.969862] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:29:14.720 [2024-02-14 20:29:51.969902] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1952454 ] 00:29:14.720 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.720 [2024-02-14 20:29:52.030573] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.720 [2024-02-14 20:29:52.106396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.659 20:29:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:15.659 20:29:52 -- common/autotest_common.sh@850 -- # return 0 00:29:15.659 20:29:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.659 20:29:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.659 20:29:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:15.659 20:29:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.659 20:29:52 -- common/autotest_common.sh@10 -- # set +x 00:29:15.659 20:29:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.659 20:29:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.659 20:29:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.919 nvme0n1 00:29:15.919 20:29:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:15.919 20:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:15.919 20:29:53 -- common/autotest_common.sh@10 -- # set +x 00:29:15.919 20:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:15.919 20:29:53 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:15.919 20:29:53 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.919 Running I/O for 2 seconds... 
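Everything under nvmf_digest_error from here on is deliberate corruption: crc32c on the target was reassigned to the accel "error" module at startup (accel_assign_opc, host/digest.sh@103), the host bdev layer was told to keep per-status NVMe error counts and retry indefinitely, and once the controller attached, the error module was switched to corrupt mode for 256 operations. Each corrupted C2H data digest then surfaces on the host as a "data digest error" plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is what fills the rest of this log. Condensed from the RPCs in the trace (SPDK_ROOT and BPERF_SOCK as before; rpc_cmd goes to the nvmf_tgt, which this run started under ip netns exec cvl_0_0_ns_spdk, while bperf_rpc goes to the bperf socket):

  # Target side: route all crc32c work through the error-injection module,
  # but leave injection disabled so the controller can attach cleanly.
  "$SPDK_ROOT/scripts/rpc.py" accel_assign_opc -o crc32c -m error
  "$SPDK_ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # Host side (bperf): count errors per NVMe status and retry forever.
  "$SPDK_ROOT/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  "$SPDK_ROOT/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Now corrupt the next 256 crc32c operations computed on the target.
  "$SPDK_ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256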
00:29:15.919 [2024-02-14 20:29:53.275852] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:15.919 [2024-02-14 20:29:53.275885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.919 [2024-02-14 20:29:53.275895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.919 [2024-02-14 20:29:53.287723] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:15.919 [2024-02-14 20:29:53.287748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.919 [2024-02-14 20:29:53.287757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.919 [2024-02-14 20:29:53.298463] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:15.919 [2024-02-14 20:29:53.298484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.919 [2024-02-14 20:29:53.298492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.919 [2024-02-14 20:29:53.306586] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:15.919 [2024-02-14 20:29:53.306605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.919 [2024-02-14 20:29:53.306618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.919 [2024-02-14 20:29:53.314892] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:15.919 [2024-02-14 20:29:53.314913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.919 [2024-02-14 20:29:53.314921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.919 [2024-02-14 20:29:53.327603] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:15.919 [2024-02-14 20:29:53.327622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.919 [2024-02-14 20:29:53.327630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.180 [2024-02-14 20:29:53.336554] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:16.180 [2024-02-14 20:29:53.336574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.180 [2024-02-14 20:29:53.336581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.180 [2024-02-14 20:29:53.345866] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:16.180 [2024-02-14 20:29:53.345887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.180 [2024-02-14 20:29:53.345894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.180 [2024-02-14 20:29:53.354180] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:16.180 [2024-02-14 20:29:53.354200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.180 [2024-02-14 20:29:53.354208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.180 [2024-02-14 20:29:53.362383] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:16.180 [2024-02-14 20:29:53.362403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.180 [2024-02-14 20:29:53.362411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.180 [2024-02-14 20:29:53.371966] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:16.180 [2024-02-14 20:29:53.371985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.180 [2024-02-14 20:29:53.371994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.180 [2024-02-14 20:29:53.380747] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:16.180 [2024-02-14 20:29:53.380766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.180 [2024-02-14 20:29:53.380775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.180 [2024-02-14 20:29:53.389599] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:16.180 [2024-02-14 20:29:53.389623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.180 [2024-02-14 20:29:53.389631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.180 [2024-02-14 20:29:53.398507] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:16.180 [2024-02-14 20:29:53.398527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.180 [2024-02-14 20:29:53.398535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.180 [2024-02-14 20:29:53.407768] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080)
00:29:16.180 [2024-02-14 20:29:53.407788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.180 [2024-02-14 20:29:53.407796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.180 [2024-02-14 20:29:53.417043] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080)
00:29:16.180 [2024-02-14 20:29:53.417062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.180 [2024-02-14 20:29:53.417071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line record — nvme_tcp.c:1389 "data digest error on tqpair=(0xf1c080)", nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) — repeats continuously from 20:29:53.407 through 20:29:54.829 (console timestamps 00:29:16.180 to 00:29:17.488), all on qid:1 with len:1, varying only in cid and lba ...]
00:29:17.488 [2024-02-14 20:29:54.837296] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080)
00:29:17.488 [2024-02-14 20:29:54.837315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.488 [2024-02-14 20:29:54.837323] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.488 [2024-02-14 20:29:54.846216] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.488 [2024-02-14 20:29:54.846236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.488 [2024-02-14 20:29:54.846243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.488 [2024-02-14 20:29:54.854501] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.488 [2024-02-14 20:29:54.854520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.488 [2024-02-14 20:29:54.854528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.488 [2024-02-14 20:29:54.862665] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.488 [2024-02-14 20:29:54.862685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.488 [2024-02-14 20:29:54.862693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.488 [2024-02-14 20:29:54.871037] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.488 [2024-02-14 20:29:54.871056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.488 [2024-02-14 20:29:54.871064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.488 [2024-02-14 20:29:54.879816] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.488 [2024-02-14 20:29:54.879835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.488 [2024-02-14 20:29:54.879842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.488 [2024-02-14 20:29:54.888115] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.488 [2024-02-14 20:29:54.888134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.488 [2024-02-14 20:29:54.888141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.488 [2024-02-14 20:29:54.896358] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.488 [2024-02-14 20:29:54.896377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.488 [2024-02-14 20:29:54.896385] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.904971] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.904991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.904998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.913929] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.913948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.913959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.922294] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.922314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.922321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.930563] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.930582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.930590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.939081] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.939105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.939113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.947930] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.947950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.947958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.956222] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.956242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:17.749 [2024-02-14 20:29:54.956249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.964506] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.964526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.964534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.972929] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.972948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.972956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.981638] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.981663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.981671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.990016] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.990040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.990047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:54.998205] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:54.998225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:54.998232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.006563] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.006582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.006590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.015369] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.015388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:3470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.015396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.023689] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.023716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.023723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.031944] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.031963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.031971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.040419] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.040438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.040446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.049284] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.049303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.049310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.057622] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.057641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.057656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.066066] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.066085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.066092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.075196] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.075215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.075223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.083673] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.749 [2024-02-14 20:29:55.083693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.749 [2024-02-14 20:29:55.083700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.749 [2024-02-14 20:29:55.091929] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.750 [2024-02-14 20:29:55.091948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.750 [2024-02-14 20:29:55.091956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.750 [2024-02-14 20:29:55.100115] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.750 [2024-02-14 20:29:55.100134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.750 [2024-02-14 20:29:55.100142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.750 [2024-02-14 20:29:55.109076] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.750 [2024-02-14 20:29:55.109095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.750 [2024-02-14 20:29:55.109103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.750 [2024-02-14 20:29:55.117434] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.750 [2024-02-14 20:29:55.117453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.750 [2024-02-14 20:29:55.117460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.750 [2024-02-14 20:29:55.125676] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.750 [2024-02-14 20:29:55.125696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.750 [2024-02-14 20:29:55.125703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.750 [2024-02-14 20:29:55.134036] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 
00:29:17.750 [2024-02-14 20:29:55.134058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.750 [2024-02-14 20:29:55.134066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.750 [2024-02-14 20:29:55.142934] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.750 [2024-02-14 20:29:55.142953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.750 [2024-02-14 20:29:55.142961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.750 [2024-02-14 20:29:55.151298] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.750 [2024-02-14 20:29:55.151317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.750 [2024-02-14 20:29:55.151324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.750 [2024-02-14 20:29:55.159428] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:17.750 [2024-02-14 20:29:55.159447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.750 [2024-02-14 20:29:55.159455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.167994] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.168014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.168022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.177072] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.177092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.177100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.185511] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.185530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.185538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.193912] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.193932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.193940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.202287] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.202306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.202314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.211016] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.211034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.211042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.219359] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.219377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.219385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.227577] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.227595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.227603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.235978] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.235996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.236004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.244771] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.010 [2024-02-14 20:29:55.244789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.010 [2024-02-14 20:29:55.244797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.010 [2024-02-14 20:29:55.253126] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.011 [2024-02-14 20:29:55.253144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.011 [2024-02-14 20:29:55.253152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.011 [2024-02-14 20:29:55.261359] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1c080) 00:29:18.011 [2024-02-14 20:29:55.261378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.011 [2024-02-14 20:29:55.261386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.011 00:29:18.011 Latency(us) 00:29:18.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.011 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:18.011 nvme0n1 : 2.00 26719.12 104.37 0.00 0.00 4785.64 2044.10 19598.38 00:29:18.011 =================================================================================================================== 00:29:18.011 Total : 26719.12 104.37 0.00 0.00 4785.64 2044.10 19598.38 00:29:18.011 0 00:29:18.011 20:29:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:18.011 20:29:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:18.011 20:29:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:18.011 | .driver_specific 00:29:18.011 | .nvme_error 00:29:18.011 | .status_code 00:29:18.011 | .command_transient_transport_error' 00:29:18.011 20:29:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:18.271 20:29:55 -- host/digest.sh@71 -- # (( 209 > 0 )) 00:29:18.271 20:29:55 -- host/digest.sh@73 -- # killprocess 1952454 00:29:18.271 20:29:55 -- common/autotest_common.sh@924 -- # '[' -z 1952454 ']' 00:29:18.271 20:29:55 -- common/autotest_common.sh@928 -- # kill -0 1952454 00:29:18.271 20:29:55 -- common/autotest_common.sh@929 -- # uname 00:29:18.271 20:29:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:18.271 20:29:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1952454 00:29:18.271 20:29:55 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:18.271 20:29:55 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:18.271 20:29:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1952454' 00:29:18.271 killing process with pid 1952454 00:29:18.271 20:29:55 -- common/autotest_common.sh@943 -- # kill 1952454 00:29:18.271 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.271 00:29:18.271 Latency(us) 00:29:18.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.271 =================================================================================================================== 00:29:18.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.271 20:29:55 -- common/autotest_common.sh@948 -- # wait 1952454 00:29:18.531 20:29:55 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:29:18.531 20:29:55 -- host/digest.sh@54 -- # local rw bs qd 00:29:18.531 
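The 4 KiB randread run above is judged by its error counters rather than by I/O failures: get_transient_errcount reads bdev_get_iostat from bdevperf's RPC server and extracts the transient-transport-error count with the jq filter traced above, and the check (( 209 > 0 )) is what passes the run. The same readback as a single standalone pipeline; the rpc.py path, socket, bdev name, and jq filter are copied from the trace, so treat this as a sketch of the check rather than the harness itself:

# Read per-bdev NVMe error statistics back from the bdevperf RPC socket and
# pull out the counter that the corrupted data digests increment.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'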
20:29:55 -- host/digest.sh@56 -- # rw=randread 00:29:18.531 20:29:55 -- host/digest.sh@56 -- # bs=131072 00:29:18.531 20:29:55 -- host/digest.sh@56 -- # qd=16 00:29:18.531 20:29:55 -- host/digest.sh@58 -- # bperfpid=1953151 00:29:18.531 20:29:55 -- host/digest.sh@60 -- # waitforlisten 1953151 /var/tmp/bperf.sock 00:29:18.531 20:29:55 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:18.531 20:29:55 -- common/autotest_common.sh@817 -- # '[' -z 1953151 ']' 00:29:18.531 20:29:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:18.531 20:29:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:18.531 20:29:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:18.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:18.531 20:29:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:18.531 20:29:55 -- common/autotest_common.sh@10 -- # set +x 00:29:18.531 [2024-02-14 20:29:55.742163] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:29:18.531 [2024-02-14 20:29:55.742210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953151 ] 00:29:18.531 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:18.531 Zero copy mechanism will not be used. 00:29:18.531 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.531 [2024-02-14 20:29:55.801329] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.531 [2024-02-14 20:29:55.869776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.470 20:29:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:19.470 20:29:56 -- common/autotest_common.sh@850 -- # return 0 00:29:19.470 20:29:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:19.470 20:29:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:19.470 20:29:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:19.470 20:29:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.470 20:29:56 -- common/autotest_common.sh@10 -- # set +x 00:29:19.470 20:29:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.470 20:29:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.470 20:29:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.730 nvme0n1 00:29:19.730 20:29:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:19.730 20:29:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.730 20:29:56 -- common/autotest_common.sh@10 -- # set +x 00:29:19.730 20:29:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.730 20:29:56 -- host/digest.sh@69 -- # bperf_py perform_tests 
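Stripped of the xtrace noise, the 128 KiB setup just traced reduces to four RPC calls against the freshly started bdevperf instance. In this sketch $rpc stands for the traced /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock invocation; every flag, address, and name below is copied from the trace:

# Keep per-bdev NVMe error counters and retry failed I/O without limit (-1),
# so injected digest errors are counted instead of failing the job.
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start clean: no crc32c error injection left over from the previous run.
$rpc accel_error_inject_error -o crc32c -t disable
# Attach the target with data digest enabled (--ddgst); this exposes bdev nvme0n1.
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Arm the injection: corrupt computed crc32c digests, with the options as traced.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32

With the injection armed, perform_tests (the next line of the trace) drives the 2-second randread job whose digest failures fill the remainder of this log.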
00:29:19.730 20:29:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.730 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:19.730 Zero copy mechanism will not be used. 00:29:19.730 Running I/O for 2 seconds... 00:29:19.730 [2024-02-14 20:29:57.034480] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.034513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.034523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.047540] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.047562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.047571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.058821] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.058852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.058860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.069467] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.069486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.069495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.080204] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.080223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.080231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.090748] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.090767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.090774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.101610] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.101633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.101641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.112557] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.112576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.112584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.123229] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.123248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.123256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.133833] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.133852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.133859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.730 [2024-02-14 20:29:57.144571] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.730 [2024-02-14 20:29:57.144591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.730 [2024-02-14 20:29:57.144599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.155316] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.155334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.155342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.165986] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.166005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.166013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.176660] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.176679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.176686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.187396] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.187416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.187424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.198231] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.198250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.198257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.209101] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.209121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.209128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.219812] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.219831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.219839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.230628] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.230654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.230662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.241244] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.241264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.241271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.251904] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.251923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.251930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.262652] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.262671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.262679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.273350] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.273369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.273377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.283996] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.284014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.284025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.294693] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.294712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.294719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.305343] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.305362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.305369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.991 [2024-02-14 20:29:57.316089] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.991 [2024-02-14 20:29:57.316108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.991 [2024-02-14 20:29:57.316115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.992 [2024-02-14 20:29:57.326776] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.992 [2024-02-14 20:29:57.326794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.992 [2024-02-14 20:29:57.326802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.992 [2024-02-14 20:29:57.337436] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.992 [2024-02-14 20:29:57.337455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.992 [2024-02-14 20:29:57.337462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.992 [2024-02-14 20:29:57.348020] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.992 [2024-02-14 20:29:57.348039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.992 [2024-02-14 20:29:57.348047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.992 [2024-02-14 20:29:57.358626] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.992 [2024-02-14 20:29:57.358644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.992 [2024-02-14 20:29:57.358658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.992 [2024-02-14 20:29:57.369238] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.992 [2024-02-14 20:29:57.369256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.992 [2024-02-14 20:29:57.369264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.992 [2024-02-14 20:29:57.379904] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.992 [2024-02-14 20:29:57.379923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.992 [2024-02-14 20:29:57.379930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.992 [2024-02-14 20:29:57.390465] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.992 [2024-02-14 20:29:57.390484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.992 [2024-02-14 20:29:57.390491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.992 [2024-02-14 20:29:57.401207] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:19.992 [2024-02-14 20:29:57.401226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.992 [2024-02-14 20:29:57.401234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.411934] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.411953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.411961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.422621] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.422639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.422652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.433242] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.433261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.433269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.443934] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.443953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.443961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.454564] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.454583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.454590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.465194] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.465212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.465223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.475781] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.475800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.475807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.486447] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.486466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.486474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.497221] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.497239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.497247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.507881] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.507900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.507908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.518454] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.518473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.518481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.529143] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.529162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.529170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.539772] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.539790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.539798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.550427] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.550446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.550454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.561124] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.561146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.561153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.571782] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.571801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.571808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.582451] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.582470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.582477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.252 [2024-02-14 20:29:57.593064] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.252 [2024-02-14 20:29:57.593083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.252 [2024-02-14 20:29:57.593091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.253 [2024-02-14 20:29:57.603879] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.253 [2024-02-14 20:29:57.603898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.253 [2024-02-14 20:29:57.603905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.253 [2024-02-14 20:29:57.614560] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.253 [2024-02-14 20:29:57.614581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.253 [2024-02-14 20:29:57.614589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.253 [2024-02-14 20:29:57.625190] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.253 [2024-02-14 20:29:57.625210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.253 [2024-02-14 20:29:57.625217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.253 [2024-02-14 20:29:57.635941] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.253 [2024-02-14 20:29:57.635961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.253 [2024-02-14 20:29:57.635968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.253 [2024-02-14 20:29:57.647081] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.253 [2024-02-14 20:29:57.647101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.253 [2024-02-14 20:29:57.647110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.253 [2024-02-14 20:29:57.658301] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.253 [2024-02-14 20:29:57.658321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.253 [2024-02-14 20:29:57.658330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.669468] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.669489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.669497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.680577] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.680596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.680604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.691965] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 
00:29:20.514 [2024-02-14 20:29:57.691984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.691991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.703352] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.703371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.703378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.714841] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.714861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.714868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.725862] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.725882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.725891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.736937] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.736958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.736967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.747982] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.748003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.748016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.759178] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.759198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.759206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.770256] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.770276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.770283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.781257] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.781276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.781284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.792355] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.792375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.792382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.803700] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.803719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.803726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.815301] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.815320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.815328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.826338] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.826357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.826365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.837442] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.837462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.837469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.848427] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.848447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.848455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.859140] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.859159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.859167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.869980] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.869999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.870007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.514 [2024-02-14 20:29:57.881170] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.514 [2024-02-14 20:29:57.881189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.514 [2024-02-14 20:29:57.881197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.515 [2024-02-14 20:29:57.892337] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.515 [2024-02-14 20:29:57.892356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.515 [2024-02-14 20:29:57.892363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.515 [2024-02-14 20:29:57.903452] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.515 [2024-02-14 20:29:57.903472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.515 [2024-02-14 20:29:57.903480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.515 [2024-02-14 20:29:57.914457] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.515 [2024-02-14 20:29:57.914477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.515 [2024-02-14 20:29:57.914485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.515 [2024-02-14 20:29:57.925195] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.515 [2024-02-14 20:29:57.925215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.515 [2024-02-14 20:29:57.925223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:57.936339] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:57.936358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.775 [2024-02-14 20:29:57.936368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:57.947359] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:57.947378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.775 [2024-02-14 20:29:57.947386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:57.958448] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:57.958467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.775 [2024-02-14 20:29:57.958475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:57.969295] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:57.969314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.775 [2024-02-14 20:29:57.969322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:57.980291] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:57.980311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.775 [2024-02-14 20:29:57.980318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:57.991384] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:57.991403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.775 [2024-02-14 20:29:57.991410] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:58.002405] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:58.002424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.775 [2024-02-14 20:29:58.002431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:58.013143] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:58.013162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.775 [2024-02-14 20:29:58.013169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:58.024156] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:58.024176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.775 [2024-02-14 20:29:58.024184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.775 [2024-02-14 20:29:58.035521] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.775 [2024-02-14 20:29:58.035543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.035550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.047057] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.047076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.047083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.058115] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.058134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.058141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.074806] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.074825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.074833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.089842] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.089862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.089870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.102367] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.102386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.102394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.113671] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.113689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.113697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.125280] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.125298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.125306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.145967] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.145986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.145993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.160736] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.160755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.160763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.776 [2024-02-14 20:29:58.179246] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:20.776 [2024-02-14 20:29:58.179266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.776 [2024-02-14 20:29:58.179274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.036 [2024-02-14 20:29:58.195470] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.036 [2024-02-14 20:29:58.195491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.036 [2024-02-14 20:29:58.195498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.036 [2024-02-14 20:29:58.207839] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.036 [2024-02-14 20:29:58.207858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.036 [2024-02-14 20:29:58.207866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.036 [2024-02-14 20:29:58.221434] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.036 [2024-02-14 20:29:58.221454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.036 [2024-02-14 20:29:58.221461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.036 [2024-02-14 20:29:58.241025] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.036 [2024-02-14 20:29:58.241045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.241052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.256542] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.256563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.256571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.268101] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.268121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.268129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.279873] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.279895] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.279903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.290751] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.290769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.290776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.301944] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.301963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.301970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.318597] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.318616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.318624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.333556] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.333575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.333583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.345275] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.345294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.345302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.356267] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.356285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.356293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.367562] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.367581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.367589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.388029] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.388048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.388056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.408966] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.408985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.408992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.427186] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.427204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.427212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.037 [2024-02-14 20:29:58.440217] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.037 [2024-02-14 20:29:58.440237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.037 [2024-02-14 20:29:58.440245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.454436] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.454456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.297 [2024-02-14 20:29:58.454464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.466473] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.466492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.297 [2024-02-14 20:29:58.466499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.477621] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.477639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.297 [2024-02-14 20:29:58.477656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.488626] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.488644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.297 [2024-02-14 20:29:58.488657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.500143] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.500162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.297 [2024-02-14 20:29:58.500169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.518753] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.518771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.297 [2024-02-14 20:29:58.518911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.534194] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.534213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.297 [2024-02-14 20:29:58.534221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.551774] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.551794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.297 [2024-02-14 20:29:58.551801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.571721] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.571740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.297 [2024-02-14 20:29:58.571748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:21.297 [2024-02-14 20:29:58.587601] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.297 [2024-02-14 20:29:58.587620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.298 [2024-02-14 20:29:58.587629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.298 [2024-02-14 20:29:58.602800] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.298 [2024-02-14 20:29:58.602820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.298 [2024-02-14 20:29:58.602828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.298 [2024-02-14 20:29:58.622906] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.298 [2024-02-14 20:29:58.622927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.298 [2024-02-14 20:29:58.622935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.298 [2024-02-14 20:29:58.642430] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.298 [2024-02-14 20:29:58.642449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.298 [2024-02-14 20:29:58.642457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.298 [2024-02-14 20:29:58.658628] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.298 [2024-02-14 20:29:58.658653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.298 [2024-02-14 20:29:58.658662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.298 [2024-02-14 20:29:58.671258] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.298 [2024-02-14 20:29:58.671281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.298 [2024-02-14 20:29:58.671289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.298 [2024-02-14 20:29:58.682732] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.298 [2024-02-14 20:29:58.682751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.298 [2024-02-14 20:29:58.682758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.298 [2024-02-14 20:29:58.701290] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.298 [2024-02-14 20:29:58.701310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.298 [2024-02-14 20:29:58.701317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.558 [2024-02-14 20:29:58.716665] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.558 [2024-02-14 20:29:58.716687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.558 [2024-02-14 20:29:58.716694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.558 [2024-02-14 20:29:58.728185] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.558 [2024-02-14 20:29:58.728205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.558 [2024-02-14 20:29:58.728212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.558 [2024-02-14 20:29:58.738786] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.558 [2024-02-14 20:29:58.738805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.558 [2024-02-14 20:29:58.738813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.558 [2024-02-14 20:29:58.749387] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.558 [2024-02-14 20:29:58.749406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.558 [2024-02-14 20:29:58.749414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.558 [2024-02-14 20:29:58.760101] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.558 [2024-02-14 20:29:58.760119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.558 [2024-02-14 20:29:58.760126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.558 [2024-02-14 20:29:58.770957] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.770976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.770984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.782181] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.782200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.782207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.793641] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.793665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.793673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.804333] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.804352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.804360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.816006] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.816025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.816032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.826968] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.826987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.826995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.838266] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.838286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.838293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.850028] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.850049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:21.559 [2024-02-14 20:29:58.850057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.863431] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.863452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.863460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.875879] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.875901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.875909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.886684] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.886703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.886711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.897360] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.897379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.897386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.908886] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.908906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.908913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.921436] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.921456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.559 [2024-02-14 20:29:58.921464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.559 [2024-02-14 20:29:58.936024] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20) 00:29:21.559 [2024-02-14 20:29:58.936044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.559 [2024-02-14 20:29:58.936052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:21.559 [2024-02-14 20:29:58.950539] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20)
00:29:21.559 [2024-02-14 20:29:58.950560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.559 [2024-02-14 20:29:58.950568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:21.559 [2024-02-14 20:29:58.967012] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20)
00:29:21.559 [2024-02-14 20:29:58.967031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.559 [2024-02-14 20:29:58.967039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:21.819 [2024-02-14 20:29:58.981264] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20)
00:29:21.819 [2024-02-14 20:29:58.981284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.819 [2024-02-14 20:29:58.981292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:21.819 [2024-02-14 20:29:58.999810] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20)
00:29:21.819 [2024-02-14 20:29:58.999831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.819 [2024-02-14 20:29:58.999839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:21.819 [2024-02-14 20:29:59.014902] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1396a20)
00:29:21.819 [2024-02-14 20:29:59.014923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.819 [2024-02-14 20:29:59.014930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:21.819
00:29:21.819 Latency(us)
00:29:21.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:21.819 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:21.819 nvme0n1 : 2.00 2521.80 315.23 0.00 0.00 6340.32 5211.67 21221.18
00:29:21.819 ===================================================================================================================
00:29:21.819 Total : 2521.80 315.23 0.00 0.00 6340.32 5211.67 21221.18
00:29:21.819 0
00:29:21.819 20:29:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:21.819 20:29:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:21.819 20:29:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:21.819 20:29:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:21.819 | .driver_specific
00:29:21.819 | .nvme_error
00:29:21.819 | .status_code
00:29:21.819 | .command_transient_transport_error'
00:29:21.819 20:29:59 -- host/digest.sh@71 -- # (( 163 > 0 ))
00:29:21.819 20:29:59 -- host/digest.sh@73 -- # killprocess 1953151
00:29:21.819 20:29:59 -- common/autotest_common.sh@924 -- # '[' -z 1953151 ']'
00:29:21.819 20:29:59 -- common/autotest_common.sh@928 -- # kill -0 1953151
00:29:21.819 20:29:59 -- common/autotest_common.sh@929 -- # uname
00:29:21.819 20:29:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:29:21.819 20:29:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1953151
00:29:22.079 20:29:59 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:29:22.079 20:29:59 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:29:22.079 20:29:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1953151'
killing process with pid 1953151
00:29:22.079 20:29:59 -- common/autotest_common.sh@943 -- # kill 1953151
00:29:22.079 Received shutdown signal, test time was about 2.000000 seconds
00:29:22.079
00:29:22.079 Latency(us)
00:29:22.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:22.079 ===================================================================================================================
00:29:22.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:22.079 20:29:59 -- common/autotest_common.sh@948 -- # wait 1953151
00:29:22.079 20:29:59 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:29:22.079 20:29:59 -- host/digest.sh@54 -- # local rw bs qd
00:29:22.079 20:29:59 -- host/digest.sh@56 -- # rw=randwrite
00:29:22.079 20:29:59 -- host/digest.sh@56 -- # bs=4096
00:29:22.079 20:29:59 -- host/digest.sh@56 -- # qd=128
00:29:22.079 20:29:59 -- host/digest.sh@58 -- # bperfpid=1953847
00:29:22.079 20:29:59 -- host/digest.sh@60 -- # waitforlisten 1953847 /var/tmp/bperf.sock
00:29:22.079 20:29:59 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:22.079 20:29:59 -- common/autotest_common.sh@817 -- # '[' -z 1953847 ']'
00:29:22.079 20:29:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:22.079 20:29:59 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:22.079 20:29:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:22.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:22.079 20:29:59 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:22.079 20:29:59 -- common/autotest_common.sh@10 -- # set +x
00:29:22.338 [2024-02-14 20:29:59.500348] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:29:22.338 [2024-02-14 20:29:59.500396] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953847 ]
00:29:22.338 EAL: No free 2048 kB hugepages reported on node 1
00:29:22.338 [2024-02-14 20:29:59.558120] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:22.338 [2024-02-14 20:29:59.633532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:22.904 20:30:00 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:22.904 20:30:00 -- common/autotest_common.sh@850 -- # return 0
00:29:22.904 20:30:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:22.904 20:30:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:23.163 20:30:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:23.163 20:30:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.163 20:30:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.163 20:30:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.163 20:30:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:23.163 20:30:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:23.422 nvme0n1
00:29:23.422 20:30:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:23.422 20:30:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:23.422 20:30:00 -- common/autotest_common.sh@10 -- # set +x
00:29:23.422 20:30:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:23.422 20:30:00 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:23.422 20:30:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Running I/O for 2 seconds...
00:29:23.682 [2024-02-14 20:30:00.867302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fcdd0 00:29:23.682 [2024-02-14 20:30:00.868560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.868589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.877425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.877673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.877695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.886665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.886884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.886903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.895815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.896066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.896085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.904954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.905215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.905234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.914054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.914294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.914312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.923105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.923348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.923366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.932202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.932443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.932461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.941294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.941549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.941567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.950399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.950755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.950773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.959446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.959702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.959721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.968562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.968794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.968816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.977663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.977925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.977943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.986765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.987023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.987041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:00.995864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:00.996110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:00.996128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.005088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.005345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.005364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.014170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.014405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.014423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.023199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.023438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.023456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.032290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.032525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.032543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.041383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.041640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.041663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.050548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.050789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.050811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.059727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.059963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.059981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.068782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.069041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.069059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.077844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.078104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.078123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.086859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.087094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.087112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.682 [2024-02-14 20:30:01.095938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.682 [2024-02-14 20:30:01.096180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.682 [2024-02-14 20:30:01.096199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.942 [2024-02-14 20:30:01.105355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.942 [2024-02-14 20:30:01.105610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.942 [2024-02-14 20:30:01.105628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.942 [2024-02-14 20:30:01.114445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.942 [2024-02-14 20:30:01.114683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.942 [2024-02-14 20:30:01.114701] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.123418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.123672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.123690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.132606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.132868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.132886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.141806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.142049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.142068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.150984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.151227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.151245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.160014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.160254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.160272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.169076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.169332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.169350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.178124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.178361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.178378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.187372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.187629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.187650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.196485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.196742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.196760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.205566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.205826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.205844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.214635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.214882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.214901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.223628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.223871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.223889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.232717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.232972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.232990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.241790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.242041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.242059] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.250911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.251144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.251162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.259924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.260160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.260177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.268976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.269231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.269249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.278060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.278299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.278317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.287134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.287373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.287394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.296167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.296418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.296436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.305248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.305497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 
[2024-02-14 20:30:01.305516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.314348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.314579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.314597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.323393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.323626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.323644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.332491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.332746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.332764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.341534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.341775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.341794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:23.943 [2024-02-14 20:30:01.350668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:23.943 [2024-02-14 20:30:01.350905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.943 [2024-02-14 20:30:01.350922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.359995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.360232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.360250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.369181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.369425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12538 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:24.204 [2024-02-14 20:30:01.369442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.378259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.378501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.378519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.387556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.387798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.387817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.396741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.396986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.397004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.405873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.406113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.406132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.414867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.415107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.415125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.423923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.424177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.424194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.433013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.433252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15765 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.433269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.442063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.442299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.442316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.451212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.451454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.451472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.460279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.460536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.460554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.469319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.469560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.469579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.478336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.478573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.478590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.487408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.487642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.487664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.496509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.496762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:12630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.496780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.505590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.505834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.505851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.514599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.514838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.514856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.523667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.523921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.523942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.532822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.533060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.204 [2024-02-14 20:30:01.533078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.204 [2024-02-14 20:30:01.541840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.204 [2024-02-14 20:30:01.542077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.205 [2024-02-14 20:30:01.542095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.205 [2024-02-14 20:30:01.550943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.205 [2024-02-14 20:30:01.551182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.205 [2024-02-14 20:30:01.551200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.205 [2024-02-14 20:30:01.560005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.205 [2024-02-14 20:30:01.560262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:19604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.205 [2024-02-14 20:30:01.560281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.205 [2024-02-14 20:30:01.569058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.205 [2024-02-14 20:30:01.569297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.205 [2024-02-14 20:30:01.569316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.205 [2024-02-14 20:30:01.578064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.205 [2024-02-14 20:30:01.578302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.205 [2024-02-14 20:30:01.578319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.205 [2024-02-14 20:30:01.587129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.205 [2024-02-14 20:30:01.587385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.205 [2024-02-14 20:30:01.587403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.205 [2024-02-14 20:30:01.596225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.205 [2024-02-14 20:30:01.596481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.205 [2024-02-14 20:30:01.596499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.205 [2024-02-14 20:30:01.605295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.205 [2024-02-14 20:30:01.605536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.205 [2024-02-14 20:30:01.605554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.205 [2024-02-14 20:30:01.614309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.205 [2024-02-14 20:30:01.614566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.205 [2024-02-14 20:30:01.614585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.465 [2024-02-14 20:30:01.623638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.465 [2024-02-14 20:30:01.623886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.465 [2024-02-14 20:30:01.623904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.465 [2024-02-14 20:30:01.632989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.465 [2024-02-14 20:30:01.633235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.465 [2024-02-14 20:30:01.633253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.465 [2024-02-14 20:30:01.642274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.465 [2024-02-14 20:30:01.642531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.465 [2024-02-14 20:30:01.642549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.465 [2024-02-14 20:30:01.651518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.465 [2024-02-14 20:30:01.651777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.465 [2024-02-14 20:30:01.651796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.465 [2024-02-14 20:30:01.660710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.465 [2024-02-14 20:30:01.660951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.465 [2024-02-14 20:30:01.660970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.669866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.670105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.670123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.678887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.679127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.679145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.687981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 
20:30:01.688238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.688258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.697037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.697279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.697297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.706053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.706289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.706308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.715064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.715301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.715318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.724137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.724390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.724408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.733210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.733450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.733467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.742247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.742485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.742503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.751361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 
00:29:24.466 [2024-02-14 20:30:01.751601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.751618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.760439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.760696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.760717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.769485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.769723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.769741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.778501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.778759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.778777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.787590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.787851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.787869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.796709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.796971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.796989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.805746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.805983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.806002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.814813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) 
with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.815047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.815064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.823911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.824163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.824180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.833036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.833274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.833291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.842127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.842373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.842391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.851256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.851493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.851512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.860253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.860492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.860509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.869313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.869549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.869566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.466 [2024-02-14 20:30:01.878557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.466 [2024-02-14 20:30:01.878799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.466 [2024-02-14 20:30:01.878817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.887834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.888076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.888095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.897105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.897375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.897392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.906309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.906536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.906555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.915361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.915588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.915606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.924506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.924751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.924770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.933735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.933977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.933995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.942799] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.943061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.943080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.951884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.952140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.952158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.960940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.961166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.961184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.970028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.970260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.970277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.979111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.979363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.979382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.988234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.988471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.988490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:01.997268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:01.997508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:01.997530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.006427] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.006666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.006701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.015552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.015810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.015829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.024588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.024834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.024853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.033874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.034112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.034130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.043154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.043391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.043408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.052730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.052981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.053000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.062667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.062933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.062952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 
[2024-02-14 20:30:02.072751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.073025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.073043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.082204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.082459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.082477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.091516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.091759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.091777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.100705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.100946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.100964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.735 [2024-02-14 20:30:02.109921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.735 [2024-02-14 20:30:02.110181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.735 [2024-02-14 20:30:02.110199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.736 [2024-02-14 20:30:02.119144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.736 [2024-02-14 20:30:02.119380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.736 [2024-02-14 20:30:02.119398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.736 [2024-02-14 20:30:02.128279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.736 [2024-02-14 20:30:02.128511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.736 [2024-02-14 20:30:02.128529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:29:24.736 [2024-02-14 20:30:02.137353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:24.736 [2024-02-14 20:30:02.137608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.736 [2024-02-14 20:30:02.137626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.146895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.147135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.147160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.156280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.156525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.156543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.165569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.165818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.165837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.174774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.175041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.175059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.183985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.184226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.184243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.193255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.193511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.193530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.202374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.202614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.202632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.211503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.211768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.211786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.220590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.220829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.220847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.229700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.229950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.229967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.238794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.239019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.239043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.247866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.248122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.248139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.256915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.257152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.257170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.265953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.060 [2024-02-14 20:30:02.266186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.060 [2024-02-14 20:30:02.266204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.060 [2024-02-14 20:30:02.274951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.061 [2024-02-14 20:30:02.275186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.275204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.284017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.061 [2024-02-14 20:30:02.284274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.284292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.293079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.061 [2024-02-14 20:30:02.293318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.293336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.302026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fd640 00:29:25.061 [2024-02-14 20:30:02.302977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.302995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.311954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f6cc8 00:29:25.061 [2024-02-14 20:30:02.313223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.313242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.321207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fb048 00:29:25.061 [2024-02-14 20:30:02.321463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.321484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.330264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fb048 00:29:25.061 [2024-02-14 20:30:02.330524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.330542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.339350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fb048 00:29:25.061 [2024-02-14 20:30:02.339589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.339607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.348443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fb048 00:29:25.061 [2024-02-14 20:30:02.348848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.348866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.357514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fb048 00:29:25.061 [2024-02-14 20:30:02.358401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.358419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.369207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fef90 00:29:25.061 [2024-02-14 20:30:02.370351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.370369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.378984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f6020 00:29:25.061 [2024-02-14 20:30:02.379660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.379694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.388118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f6020 00:29:25.061 [2024-02-14 20:30:02.388358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.388376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.397190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f6020 00:29:25.061 [2024-02-14 20:30:02.397722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.397740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.406366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f6890 00:29:25.061 [2024-02-14 20:30:02.407807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.407825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.416312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7da8 00:29:25.061 [2024-02-14 20:30:02.416723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.416741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.425547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7da8 00:29:25.061 [2024-02-14 20:30:02.425964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.425981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.434757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7da8 00:29:25.061 [2024-02-14 20:30:02.434951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.434969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.444039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7da8 00:29:25.061 [2024-02-14 20:30:02.444228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.444245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.453287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7da8 00:29:25.061 [2024-02-14 20:30:02.453481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.453499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.462537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7da8 00:29:25.061 [2024-02-14 20:30:02.462854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.462872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.061 [2024-02-14 20:30:02.471758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7da8 00:29:25.061 [2024-02-14 20:30:02.472098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.061 [2024-02-14 20:30:02.472116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.321 [2024-02-14 20:30:02.480996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7da8 00:29:25.321 [2024-02-14 20:30:02.481614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.321 [2024-02-14 20:30:02.481633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.321 [2024-02-14 20:30:02.490196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7da8 00:29:25.321 [2024-02-14 20:30:02.491073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.321 [2024-02-14 20:30:02.491090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.321 [2024-02-14 20:30:02.502870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f5be8 00:29:25.321 [2024-02-14 20:30:02.503834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.321 [2024-02-14 20:30:02.503852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:25.321 [2024-02-14 20:30:02.512099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.321 [2024-02-14 20:30:02.512478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.321 [2024-02-14 20:30:02.512496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:25.321 [2024-02-14 20:30:02.521182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.321 [2024-02-14 20:30:02.521391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.321 [2024-02-14 
20:30:02.521409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:25.321 [2024-02-14 20:30:02.530238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.321 [2024-02-14 20:30:02.530675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.530693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.539307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.322 [2024-02-14 20:30:02.539516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.539533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.548349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.322 [2024-02-14 20:30:02.548570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.548588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.557335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.322 [2024-02-14 20:30:02.558902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.558919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.567026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f81e0 00:29:25.322 [2024-02-14 20:30:02.568461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.568482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.575753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f81e0 00:29:25.322 [2024-02-14 20:30:02.577099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.577117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.584460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f5be8 00:29:25.322 [2024-02-14 20:30:02.585804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:25.322 [2024-02-14 20:30:02.585822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.593266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f3a28 00:29:25.322 [2024-02-14 20:30:02.594581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.594599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.601869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f1ca0 00:29:25.322 [2024-02-14 20:30:02.603551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.603569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.611511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f96f8 00:29:25.322 [2024-02-14 20:30:02.612869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.612887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.621604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f8e88 00:29:25.322 [2024-02-14 20:30:02.622210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.622228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.630727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f8e88 00:29:25.322 [2024-02-14 20:30:02.630974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.630993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.639780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f8e88 00:29:25.322 [2024-02-14 20:30:02.640164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.640182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.648832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f8e88 00:29:25.322 [2024-02-14 20:30:02.649091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9335 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.649116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.658162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f8e88 00:29:25.322 [2024-02-14 20:30:02.658409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.658427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.667303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f8e88 00:29:25.322 [2024-02-14 20:30:02.667657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.667675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.676444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f8e88 00:29:25.322 [2024-02-14 20:30:02.677785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.677802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.686978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f4b08 00:29:25.322 [2024-02-14 20:30:02.688129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.688148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.695560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190eee38 00:29:25.322 [2024-02-14 20:30:02.696442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.696461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.704289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f1430 00:29:25.322 [2024-02-14 20:30:02.705805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.705823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.712177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f6890 00:29:25.322 [2024-02-14 20:30:02.713310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:2637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.713328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.725440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f0788 00:29:25.322 [2024-02-14 20:30:02.726496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.726514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.322 [2024-02-14 20:30:02.735679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.322 [2024-02-14 20:30:02.736032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.322 [2024-02-14 20:30:02.736051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.581 [2024-02-14 20:30:02.744966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.581 [2024-02-14 20:30:02.745207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.581 [2024-02-14 20:30:02.745225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.581 [2024-02-14 20:30:02.754055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.581 [2024-02-14 20:30:02.754265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.581 [2024-02-14 20:30:02.754283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.581 [2024-02-14 20:30:02.763070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.581 [2024-02-14 20:30:02.763535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.581 [2024-02-14 20:30:02.763553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.581 [2024-02-14 20:30:02.772119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.581 [2024-02-14 20:30:02.772626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.581 [2024-02-14 20:30:02.772644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.581 [2024-02-14 20:30:02.781194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f7100 00:29:25.581 [2024-02-14 20:30:02.781404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:40 nsid:1 lba:2810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.582 [2024-02-14 20:30:02.781422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.582 [2024-02-14 20:30:02.791133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fc128 00:29:25.582 [2024-02-14 20:30:02.793010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.582 [2024-02-14 20:30:02.793027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.582 [2024-02-14 20:30:02.801517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f1868 00:29:25.582 [2024-02-14 20:30:02.802475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.582 [2024-02-14 20:30:02.802493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:25.582 [2024-02-14 20:30:02.810101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190edd58 00:29:25.582 [2024-02-14 20:30:02.811099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.582 [2024-02-14 20:30:02.811121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:25.582 [2024-02-14 20:30:02.818819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190fc128 00:29:25.582 [2024-02-14 20:30:02.819640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.582 [2024-02-14 20:30:02.819662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:25.582 [2024-02-14 20:30:02.827522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f8618 00:29:25.582 [2024-02-14 20:30:02.828572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.582 [2024-02-14 20:30:02.828593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:25.582 [2024-02-14 20:30:02.836106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7bc0) with pdu=0x2000190f3e60 00:29:25.582 [2024-02-14 20:30:02.838783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.582 [2024-02-14 20:30:02.838801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:25.582 00:29:25.582 Latency(us) 00:29:25.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.582 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.582 
00:29:25.582 Latency(us)
00:29:25.582 Device Information : runtime(s)      IOPS   MiB/s  Fail/s  TO/s  Average      min       max
00:29:25.582 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:25.582 nvme0n1            :       2.01  27379.58  106.95    0.00  0.00  4664.89  2808.69  26838.55
00:29:25.582 ===================================================================================================================
00:29:25.582 Total              :             27379.58  106.95    0.00  0.00  4664.89  2808.69  26838.55
00:29:25.582 0
00:29:25.582 20:30:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:25.582 20:30:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:25.582 20:30:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:25.582 | .driver_specific
00:29:25.582 | .nvme_error
00:29:25.582 | .status_code
00:29:25.582 | .command_transient_transport_error'
00:29:25.582 20:30:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:25.841 20:30:03 -- host/digest.sh@71 -- # (( 215 > 0 ))
00:29:25.841 20:30:03 -- host/digest.sh@73 -- # killprocess 1953847
00:29:25.841 20:30:03 -- common/autotest_common.sh@924 -- # '[' -z 1953847 ']'
00:29:25.841 20:30:03 -- common/autotest_common.sh@928 -- # kill -0 1953847
00:29:25.841 20:30:03 -- common/autotest_common.sh@929 -- # uname
00:29:25.841 20:30:03 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:29:25.841 20:30:03 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1953847
00:29:25.841 20:30:03 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:29:25.841 20:30:03 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:29:25.841 20:30:03 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1953847'
killing process with pid 1953847
20:30:03 -- common/autotest_common.sh@943 -- # kill 1953847
Received shutdown signal, test time was about 2.000000 seconds
00:29:25.841
00:29:25.841 Latency(us)
00:29:25.841 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average   min   max
00:29:25.841 ===================================================================================================================
00:29:25.841 Total              :             0.00   0.00    0.00  0.00     0.00  0.00  0.00
00:29:25.841 20:30:03 -- common/autotest_common.sh@948 -- # wait 1953847
00:29:26.101 20:30:03 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:29:26.101 20:30:03 -- host/digest.sh@54 -- # local rw bs qd
00:29:26.101 20:30:03 -- host/digest.sh@56 -- # rw=randwrite
00:29:26.101 20:30:03 -- host/digest.sh@56 -- # bs=131072
00:29:26.101 20:30:03 -- host/digest.sh@56 -- # qd=16
00:29:26.101 20:30:03 -- host/digest.sh@58 -- # bperfpid=1954456
00:29:26.101 20:30:03 -- host/digest.sh@60 -- # waitforlisten 1954456 /var/tmp/bperf.sock
00:29:26.101 20:30:03 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:26.101 20:30:03 -- common/autotest_common.sh@817 -- # '[' -z 1954456 ']'
00:29:26.101 20:30:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:26.101 20:30:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:26.101 20:30:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
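The get_transient_errcount trace above expands to a single RPC-plus-jq pipeline; a minimal standalone sketch, assuming only the rpc.py path and bperf socket that appear in this log:

get_transient_errcount() {
    local bdev=$1
    # --nvme-error-stat (set via bdev_nvme_set_options below) exposes the
    # per-status-code counters under driver_specific in bdev_get_iostat.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# digest.sh passes when the counter moved, as in the (( 215 > 0 )) check above:
(( $(get_transient_errcount nvme0n1) > 0 ))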
00:29:26.101 20:30:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:26.101 20:30:03 -- common/autotest_common.sh@10 -- # set +x
00:29:26.101 [2024-02-14 20:30:03.332252] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:29:26.101 [2024-02-14 20:30:03.332302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954456 ]
00:29:26.101 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:26.101 Zero copy mechanism will not be used.
00:29:26.101 EAL: No free 2048 kB hugepages reported on node 1
00:29:26.101 [2024-02-14 20:30:03.392924] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:26.101 [2024-02-14 20:30:03.469538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:27.040 20:30:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:27.040 20:30:04 -- common/autotest_common.sh@850 -- # return 0
00:29:27.040 20:30:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:27.040 20:30:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:27.040 20:30:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:27.040 20:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:27.040 20:30:04 -- common/autotest_common.sh@10 -- # set +x
00:29:27.040 20:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:27.040 20:30:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:27.040 20:30:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:27.300 nvme0n1
00:29:27.300 20:30:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:27.300 20:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:27.300 20:30:04 -- common/autotest_common.sh@10 -- # set +x
00:29:27.300 20:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:27.300 20:30:04 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:27.300 20:30:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:27.560 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:27.560 Zero copy mechanism will not be used.
00:29:27.560 Running I/O for 2 seconds...
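Assembled from the commands traced above (a sketch, not a verbatim excerpt of digest.sh), this second error run boils down to: launch bdevperf on a private RPC socket, enable NVMe error statistics with unlimited retries, attach the target with data digest (--ddgst) enabled, arm CRC32C corruption in the accel error injector, and drive the workload:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock

# 131072-byte random writes, queue depth 16, 2 s, core mask 0x2; -z waits for an RPC start signal.
"$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
# (the harness blocks on waitforlisten until $sock accepts connections)

"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t disable
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

With --ddgst negotiated, every write whose CRC32C the injector corrupts fails the target's data digest check and surfaces host-side as the transient transport errors logged below.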
00:29:27.560 [2024-02-14 20:30:04.793484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7f70) with pdu=0x2000190fef90
00:29:27.560 [2024-02-14 20:30:04.793832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:27.560 [2024-02-14 20:30:04.793859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... similar triples repeat for the rest of the 2-second run (20:30:04.808799 through 20:30:06.743568): each injected CRC32C corruption produces one Data digest error on tqpair=(0x1fd7f70), pdu=0x2000190fef90, followed by a WRITE (sqid:1 cid:15 nsid:1, len:32, varying lba) completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:29:29.387 [2024-02-14 20:30:06.756734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fd7f70) with pdu=0x2000190fef90
00:29:29.387 [2024-02-14 20:30:06.757142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:29.387 [2024-02-14 20:30:06.757160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:29.387
00:29:29.387 Latency(us)
00:29:29.387 Device Information : runtime(s)     IOPS   MiB/s  Fail/s  TO/s  Average      min       max
00:29:29.387 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:29.387 nvme0n1            :       2.01  1801.57  225.20    0.00  0.00  8860.76  5929.45  29459.99
00:29:29.387 ===================================================================================================================
00:29:29.387 Total              :             1801.57  225.20    0.00  0.00  8860.76  5929.45  29459.99
00:29:29.387 0
00:29:29.387 20:30:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:29.387 20:30:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:29.387 20:30:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:29.387 | .driver_specific
00:29:29.387 | .nvme_error 00:29:29.387 | .status_code 00:29:29.387 | .command_transient_transport_error' 00:29:29.387 20:30:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:29.647 20:30:06 -- host/digest.sh@71 -- # (( 116 > 0 )) 00:29:29.647 20:30:06 -- host/digest.sh@73 -- # killprocess 1954456 00:29:29.647 20:30:06 -- common/autotest_common.sh@924 -- # '[' -z 1954456 ']' 00:29:29.647 20:30:06 -- common/autotest_common.sh@928 -- # kill -0 1954456 00:29:29.647 20:30:06 -- common/autotest_common.sh@929 -- # uname 00:29:29.647 20:30:06 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:29.647 20:30:06 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1954456 00:29:29.647 20:30:07 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:29.647 20:30:07 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:29.647 20:30:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1954456' 00:29:29.647 killing process with pid 1954456 00:29:29.647 20:30:07 -- common/autotest_common.sh@943 -- # kill 1954456 00:29:29.647 Received shutdown signal, test time was about 2.000000 seconds 00:29:29.647 00:29:29.647 Latency(us) 00:29:29.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.647 =================================================================================================================== 00:29:29.647 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.647 20:30:07 -- common/autotest_common.sh@948 -- # wait 1954456 00:29:29.907 20:30:07 -- host/digest.sh@115 -- # killprocess 1952417 00:29:29.907 20:30:07 -- common/autotest_common.sh@924 -- # '[' -z 1952417 ']' 00:29:29.907 20:30:07 -- common/autotest_common.sh@928 -- # kill -0 1952417 00:29:29.907 20:30:07 -- common/autotest_common.sh@929 -- # uname 00:29:29.907 20:30:07 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:29.907 20:30:07 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1952417 00:29:29.907 20:30:07 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:29.907 20:30:07 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:29.907 20:30:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1952417' 00:29:29.907 killing process with pid 1952417 00:29:29.907 20:30:07 -- common/autotest_common.sh@943 -- # kill 1952417 00:29:29.907 20:30:07 -- common/autotest_common.sh@948 -- # wait 1952417 00:29:30.166 00:29:30.166 real 0m16.532s 00:29:30.166 user 0m32.371s 00:29:30.166 sys 0m3.535s 00:29:30.166 20:30:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:30.166 20:30:07 -- common/autotest_common.sh@10 -- # set +x 00:29:30.166 ************************************ 00:29:30.166 END TEST nvmf_digest_error 00:29:30.166 ************************************ 00:29:30.166 20:30:07 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:30.166 20:30:07 -- host/digest.sh@139 -- # nvmftestfini 00:29:30.166 20:30:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:30.166 20:30:07 -- nvmf/common.sh@116 -- # sync 00:29:30.166 20:30:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:30.166 20:30:07 -- nvmf/common.sh@119 -- # set +e 00:29:30.166 20:30:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:30.166 20:30:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:30.166 rmmod nvme_tcp 00:29:30.166 rmmod nvme_fabrics 00:29:30.166 rmmod nvme_keyring 
00:29:30.166 20:30:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:30.166 20:30:07 -- nvmf/common.sh@123 -- # set -e 00:29:30.166 20:30:07 -- nvmf/common.sh@124 -- # return 0 00:29:30.166 20:30:07 -- nvmf/common.sh@477 -- # '[' -n 1952417 ']' 00:29:30.166 20:30:07 -- nvmf/common.sh@478 -- # killprocess 1952417 00:29:30.166 20:30:07 -- common/autotest_common.sh@924 -- # '[' -z 1952417 ']' 00:29:30.166 20:30:07 -- common/autotest_common.sh@928 -- # kill -0 1952417 00:29:30.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (1952417) - No such process 00:29:30.166 20:30:07 -- common/autotest_common.sh@951 -- # echo 'Process with pid 1952417 is not found' 00:29:30.166 Process with pid 1952417 is not found 00:29:30.166 20:30:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:30.166 20:30:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:30.166 20:30:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:30.166 20:30:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.166 20:30:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:30.166 20:30:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.166 20:30:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.166 20:30:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.700 20:30:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:32.700 00:29:32.700 real 0m41.469s 00:29:32.700 user 1m6.721s 00:29:32.700 sys 0m11.718s 00:29:32.700 20:30:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:32.700 20:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.700 ************************************ 00:29:32.700 END TEST nvmf_digest 00:29:32.700 ************************************ 00:29:32.700 20:30:09 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:29:32.700 20:30:09 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:29:32.700 20:30:09 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:29:32.700 20:30:09 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:32.700 20:30:09 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:29:32.700 20:30:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:32.700 20:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:32.700 ************************************ 00:29:32.700 START TEST nvmf_bdevperf 00:29:32.701 ************************************ 00:29:32.701 20:30:09 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:32.701 * Looking for test storage... 
00:29:32.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.701 20:30:09 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.701 20:30:09 -- nvmf/common.sh@7 -- # uname -s 00:29:32.701 20:30:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.701 20:30:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.701 20:30:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.701 20:30:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.701 20:30:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.701 20:30:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.701 20:30:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.701 20:30:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.701 20:30:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.701 20:30:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.701 20:30:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:32.701 20:30:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:32.701 20:30:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.701 20:30:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.701 20:30:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.701 20:30:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.701 20:30:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.701 20:30:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.701 20:30:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.701 20:30:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.701 20:30:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.701 20:30:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.701 20:30:09 -- paths/export.sh@5 -- # export PATH 00:29:32.701 20:30:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.701 20:30:09 -- nvmf/common.sh@46 -- # : 0 00:29:32.701 20:30:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:32.701 20:30:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:32.701 20:30:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:32.701 20:30:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.701 20:30:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.701 20:30:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:32.701 20:30:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:32.701 20:30:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:32.701 20:30:09 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.701 20:30:09 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.701 20:30:09 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:32.701 20:30:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:32.701 20:30:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.701 20:30:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:32.701 20:30:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:32.701 20:30:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:32.701 20:30:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.701 20:30:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.701 20:30:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.701 20:30:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:32.701 20:30:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:32.701 20:30:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:32.701 20:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:39.269 20:30:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:39.269 20:30:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:39.269 20:30:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:39.269 20:30:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:39.269 20:30:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:39.269 20:30:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:39.269 20:30:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:39.269 20:30:15 -- nvmf/common.sh@294 -- # net_devs=() 00:29:39.269 20:30:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:39.269 20:30:15 -- nvmf/common.sh@295 
-- # e810=() 00:29:39.269 20:30:15 -- nvmf/common.sh@295 -- # local -ga e810 00:29:39.269 20:30:15 -- nvmf/common.sh@296 -- # x722=() 00:29:39.269 20:30:15 -- nvmf/common.sh@296 -- # local -ga x722 00:29:39.269 20:30:15 -- nvmf/common.sh@297 -- # mlx=() 00:29:39.269 20:30:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:39.269 20:30:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.269 20:30:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:39.269 20:30:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:39.269 20:30:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:39.269 20:30:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:39.269 20:30:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:39.269 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:39.269 20:30:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:39.269 20:30:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:39.269 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:39.269 20:30:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:39.269 20:30:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:39.269 20:30:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.269 20:30:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:39.269 20:30:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.269 20:30:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:39.269 Found 
net devices under 0000:af:00.0: cvl_0_0 00:29:39.269 20:30:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.269 20:30:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:39.269 20:30:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.269 20:30:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:39.269 20:30:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.269 20:30:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:39.269 Found net devices under 0000:af:00.1: cvl_0_1 00:29:39.269 20:30:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.269 20:30:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:39.269 20:30:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:39.269 20:30:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:39.269 20:30:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:39.269 20:30:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:39.269 20:30:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:39.269 20:30:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:39.269 20:30:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:39.269 20:30:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:39.269 20:30:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:39.269 20:30:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:39.269 20:30:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:39.269 20:30:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:39.269 20:30:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:39.269 20:30:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:39.269 20:30:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:39.269 20:30:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:39.269 20:30:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:39.269 20:30:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:39.269 20:30:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:39.269 20:30:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:39.269 20:30:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:39.269 20:30:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:39.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:39.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:29:39.269 00:29:39.269 --- 10.0.0.2 ping statistics --- 00:29:39.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.269 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:39.269 20:30:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:39.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:39.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:29:39.269 00:29:39.269 --- 10.0.0.1 ping statistics --- 00:29:39.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.269 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:39.269 20:30:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:39.269 20:30:15 -- nvmf/common.sh@410 -- # return 0 00:29:39.269 20:30:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:39.269 20:30:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:39.269 20:30:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:39.269 20:30:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:39.269 20:30:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:39.269 20:30:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:39.269 20:30:15 -- host/bdevperf.sh@25 -- # tgt_init 00:29:39.269 20:30:15 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:39.269 20:30:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:39.269 20:30:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:39.269 20:30:15 -- common/autotest_common.sh@10 -- # set +x 00:29:39.269 20:30:15 -- nvmf/common.sh@469 -- # nvmfpid=1959349 00:29:39.269 20:30:15 -- nvmf/common.sh@470 -- # waitforlisten 1959349 00:29:39.269 20:30:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:39.269 20:30:15 -- common/autotest_common.sh@817 -- # '[' -z 1959349 ']' 00:29:39.269 20:30:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.269 20:30:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:39.269 20:30:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.269 20:30:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:39.269 20:30:15 -- common/autotest_common.sh@10 -- # set +x 00:29:39.269 [2024-02-14 20:30:15.803662] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:29:39.269 [2024-02-14 20:30:15.803709] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.269 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.269 [2024-02-14 20:30:15.866284] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:39.269 [2024-02-14 20:30:15.935985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:39.269 [2024-02-14 20:30:15.936100] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.269 [2024-02-14 20:30:15.936108] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.269 [2024-02-14 20:30:15.936114] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:39.269 [2024-02-14 20:30:15.936238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.269 [2024-02-14 20:30:15.936306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:39.269 [2024-02-14 20:30:15.936307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.269 20:30:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:39.269 20:30:16 -- common/autotest_common.sh@850 -- # return 0 00:29:39.269 20:30:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:39.269 20:30:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:39.269 20:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.269 20:30:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.270 20:30:16 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:39.270 20:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.270 20:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.270 [2024-02-14 20:30:16.636172] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.270 20:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.270 20:30:16 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:39.270 20:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.270 20:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.270 Malloc0 00:29:39.270 20:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.270 20:30:16 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:39.270 20:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.270 20:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.270 20:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.270 20:30:16 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:39.270 20:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.270 20:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.270 20:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.270 20:30:16 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.270 20:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.529 20:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:39.529 [2024-02-14 20:30:16.687537] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.529 20:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.529 20:30:16 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:39.529 20:30:16 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:39.529 20:30:16 -- nvmf/common.sh@520 -- # config=() 00:29:39.529 20:30:16 -- nvmf/common.sh@520 -- # local subsystem config 00:29:39.529 20:30:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:39.529 20:30:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:39.529 { 00:29:39.529 "params": { 00:29:39.529 "name": "Nvme$subsystem", 00:29:39.529 "trtype": "$TEST_TRANSPORT", 00:29:39.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.529 "adrfam": "ipv4", 00:29:39.529 "trsvcid": "$NVMF_PORT", 00:29:39.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.529 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.529 "hdgst": ${hdgst:-false}, 00:29:39.529 "ddgst": ${ddgst:-false} 00:29:39.529 }, 00:29:39.529 "method": "bdev_nvme_attach_controller" 00:29:39.529 } 00:29:39.529 EOF 00:29:39.529 )") 00:29:39.529 20:30:16 -- nvmf/common.sh@542 -- # cat 00:29:39.529 20:30:16 -- nvmf/common.sh@544 -- # jq . 00:29:39.529 20:30:16 -- nvmf/common.sh@545 -- # IFS=, 00:29:39.529 20:30:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:39.529 "params": { 00:29:39.529 "name": "Nvme1", 00:29:39.529 "trtype": "tcp", 00:29:39.529 "traddr": "10.0.0.2", 00:29:39.529 "adrfam": "ipv4", 00:29:39.529 "trsvcid": "4420", 00:29:39.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:39.529 "hdgst": false, 00:29:39.529 "ddgst": false 00:29:39.529 }, 00:29:39.529 "method": "bdev_nvme_attach_controller" 00:29:39.529 }' 00:29:39.529 [2024-02-14 20:30:16.732842] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:29:39.529 [2024-02-14 20:30:16.732883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959599 ] 00:29:39.529 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.529 [2024-02-14 20:30:16.793193] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.529 [2024-02-14 20:30:16.868276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.529 [2024-02-14 20:30:16.868328] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:39.788 Running I/O for 1 seconds... 
00:29:41.164 00:29:41.164 Latency(us) 00:29:41.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.164 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:41.164 Verification LBA range: start 0x0 length 0x4000 00:29:41.164 Nvme1n1 : 1.01 16898.20 66.01 0.00 0.00 7545.08 1131.28 23093.64 00:29:41.164 =================================================================================================================== 00:29:41.164 Total : 16898.20 66.01 0.00 0.00 7545.08 1131.28 23093.64 00:29:41.164 [2024-02-14 20:30:18.152574] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:41.164 20:30:18 -- host/bdevperf.sh@30 -- # bdevperfpid=1959830 00:29:41.164 20:30:18 -- host/bdevperf.sh@32 -- # sleep 3 00:29:41.164 20:30:18 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:41.164 20:30:18 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:41.164 20:30:18 -- nvmf/common.sh@520 -- # config=() 00:29:41.164 20:30:18 -- nvmf/common.sh@520 -- # local subsystem config 00:29:41.164 20:30:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:41.164 20:30:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:41.164 { 00:29:41.164 "params": { 00:29:41.164 "name": "Nvme$subsystem", 00:29:41.164 "trtype": "$TEST_TRANSPORT", 00:29:41.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.164 "adrfam": "ipv4", 00:29:41.165 "trsvcid": "$NVMF_PORT", 00:29:41.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.165 "hdgst": ${hdgst:-false}, 00:29:41.165 "ddgst": ${ddgst:-false} 00:29:41.165 }, 00:29:41.165 "method": "bdev_nvme_attach_controller" 00:29:41.165 } 00:29:41.165 EOF 00:29:41.165 )") 00:29:41.165 20:30:18 -- nvmf/common.sh@542 -- # cat 00:29:41.165 20:30:18 -- nvmf/common.sh@544 -- # jq . 00:29:41.165 20:30:18 -- nvmf/common.sh@545 -- # IFS=, 00:29:41.165 20:30:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:41.165 "params": { 00:29:41.165 "name": "Nvme1", 00:29:41.165 "trtype": "tcp", 00:29:41.165 "traddr": "10.0.0.2", 00:29:41.165 "adrfam": "ipv4", 00:29:41.165 "trsvcid": "4420", 00:29:41.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:41.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:41.165 "hdgst": false, 00:29:41.165 "ddgst": false 00:29:41.165 }, 00:29:41.165 "method": "bdev_nvme_attach_controller" 00:29:41.165 }' 00:29:41.165 [2024-02-14 20:30:18.396447] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:29:41.165 [2024-02-14 20:30:18.396494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959830 ] 00:29:41.165 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.165 [2024-02-14 20:30:18.455370] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.165 [2024-02-14 20:30:18.522636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.165 [2024-02-14 20:30:18.522694] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:41.731 Running I/O for 15 seconds... 00:29:44.265 20:30:21 -- host/bdevperf.sh@33 -- # kill -9 1959349 00:29:44.265 20:30:21 -- host/bdevperf.sh@35 -- # sleep 3 00:29:44.265 [2024-02-14 20:30:21.371893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.371935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.371952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.371961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.371970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.371977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.371986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.371993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78776 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:44.265 [2024-02-14 20:30:21.372225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.265 [2024-02-14 20:30:21.372282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.265 [2024-02-14 20:30:21.372291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372370] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.266 [2024-02-14 20:30:21.372746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.266 [2024-02-14 20:30:21.372815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.266 [2024-02-14 20:30:21.372823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.372834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.372848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.372862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.372876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.372890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.372905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.372920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.372934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.372948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 
[2024-02-14 20:30:21.372956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.372962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.372976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.372990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.372998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.373019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.373033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.373047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.373075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.373208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.373222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:26 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.373249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.267 [2024-02-14 20:30:21.373305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.267 [2024-02-14 20:30:21.373312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.267 [2024-02-14 20:30:21.373319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.268 [2024-02-14 20:30:21.373367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79320 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.268 [2024-02-14 20:30:21.373411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.268 
[2024-02-14 20:30:21.373540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.268 [2024-02-14 20:30:21.373600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.268 [2024-02-14 20:30:21.373614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.268 [2024-02-14 20:30:21.373629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.268 [2024-02-14 20:30:21.373663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.268 [2024-02-14 20:30:21.373822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373830] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a59f0 is same with the state(5) to be set 00:29:44.268 [2024-02-14 20:30:21.373838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:44.268 [2024-02-14 20:30:21.373844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:44.268 [2024-02-14 20:30:21.373851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:29:44.268 [2024-02-14 20:30:21.373858] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373901] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a59f0 was disconnected and freed. reset controller. 00:29:44.268 [2024-02-14 20:30:21.373944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.268 [2024-02-14 20:30:21.373953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.268 [2024-02-14 20:30:21.373968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.268 [2024-02-14 20:30:21.373982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.268 [2024-02-14 20:30:21.373989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.269 [2024-02-14 20:30:21.373995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.269 [2024-02-14 20:30:21.374002] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.269 [2024-02-14 20:30:21.376118] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.269 [2024-02-14 20:30:21.376145] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.269 [2024-02-14 20:30:21.376810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.377197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.377209] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.269 [2024-02-14 20:30:21.377216] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.269 [2024-02-14 20:30:21.377332] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.269 [2024-02-14 20:30:21.377446] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.269 [2024-02-14 20:30:21.377454] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.269 [2024-02-14 20:30:21.377461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.269 [2024-02-14 20:30:21.379245] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
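The dump above shows bdev_nvme draining a disconnected I/O qpair: every outstanding READ and WRITE is completed manually with ABORTED - SQ DELETION status before the qpair (0x21a59f0) is freed and a controller reset is scheduled, and the four queued ASYNC EVENT REQUESTs on the admin queue are aborted the same way. One way to get a handle on a dump this size is to tally the aborted commands by opcode and distinct LBA; the sketch below does that with a regex over the NOTICE lines (the log file name and the regex are triage assumptions, not part of the SPDK test harness):

    #!/usr/bin/env python3
    """Tally the I/O commands printed by nvme_io_qpair_print_command above."""
    import re
    from collections import Counter

    # Matches e.g. "*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78424 len:8"
    CMD_RE = re.compile(
        r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

    def tally(path="console.log"):           # assumed file name
        ops, lbas = Counter(), set()
        with open(path) as fh:
            for line in fh:
                for op, _sqid, _cid, _nsid, lba, _len in CMD_RE.findall(line):
                    ops[op] += 1
                    lbas.add(int(lba))
        return ops, lbas

    if __name__ == "__main__":
        ops, lbas = tally()
        print(ops, "distinct LBAs:", len(lbas))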
00:29:44.269 [2024-02-14 20:30:21.388181] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.269 [2024-02-14 20:30:21.388628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.389063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.389096] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.269 [2024-02-14 20:30:21.389119] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.269 [2024-02-14 20:30:21.389293] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.269 [2024-02-14 20:30:21.389433] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.269 [2024-02-14 20:30:21.389440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.269 [2024-02-14 20:30:21.389447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.269 [2024-02-14 20:30:21.391241] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.269 [2024-02-14 20:30:21.400129] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.269 [2024-02-14 20:30:21.400619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.400989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.401022] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.269 [2024-02-14 20:30:21.401043] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.269 [2024-02-14 20:30:21.401374] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.269 [2024-02-14 20:30:21.401605] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.269 [2024-02-14 20:30:21.401630] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.269 [2024-02-14 20:30:21.401675] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.269 [2024-02-14 20:30:21.403811] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
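The repeated "connect() failed, errno = 111" from posix_sock_create is ECONNREFUSED on Linux: the TCP connection attempt to 10.0.0.2:4420 is being refused, i.e. nothing is accepting on the target port while these resets are in flight. A minimal probe (not SPDK code; address and port taken from the log) that classifies the same failure:

    #!/usr/bin/env python3
    """Classify the connect() failure seen above; errno 111 is ECONNREFUSED."""
    import errno
    import socket

    def probe(addr="10.0.0.2", port=4420, timeout=1.0):
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return "listening"
        except ConnectionRefusedError:                    # errno 111 on Linux
            return "refused: no listener on the port"
        except OSError as exc:
            name = errno.errorcode.get(exc.errno, "?")
            return f"other failure: errno {exc.errno} ({name})"

    if __name__ == "__main__":
        print(probe())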
00:29:44.269 [2024-02-14 20:30:21.412070] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.269 [2024-02-14 20:30:21.412607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.413037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.413048] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.269 [2024-02-14 20:30:21.413057] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.269 [2024-02-14 20:30:21.413154] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.269 [2024-02-14 20:30:21.413264] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.269 [2024-02-14 20:30:21.413271] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.269 [2024-02-14 20:30:21.413277] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.269 [2024-02-14 20:30:21.415061] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.269 [2024-02-14 20:30:21.423939] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.269 [2024-02-14 20:30:21.424435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.424790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.424812] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.269 [2024-02-14 20:30:21.424818] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.269 [2024-02-14 20:30:21.424929] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.269 [2024-02-14 20:30:21.425024] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.269 [2024-02-14 20:30:21.425032] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.269 [2024-02-14 20:30:21.425038] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.269 [2024-02-14 20:30:21.426832] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.269 [2024-02-14 20:30:21.435848] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.269 [2024-02-14 20:30:21.436431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.436840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.436876] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.269 [2024-02-14 20:30:21.436898] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.269 [2024-02-14 20:30:21.437025] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.269 [2024-02-14 20:30:21.437121] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.269 [2024-02-14 20:30:21.437129] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.269 [2024-02-14 20:30:21.437135] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.269 [2024-02-14 20:30:21.438785] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.269 [2024-02-14 20:30:21.447651] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.269 [2024-02-14 20:30:21.448196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.448640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.448684] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.269 [2024-02-14 20:30:21.448706] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.269 [2024-02-14 20:30:21.449192] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.269 [2024-02-14 20:30:21.449366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.269 [2024-02-14 20:30:21.449378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.269 [2024-02-14 20:30:21.449388] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.269 [2024-02-14 20:30:21.452003] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.269 [2024-02-14 20:30:21.460136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.269 [2024-02-14 20:30:21.460698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.460952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.269 [2024-02-14 20:30:21.460983] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.269 [2024-02-14 20:30:21.461004] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.269 [2024-02-14 20:30:21.461259] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.269 [2024-02-14 20:30:21.461367] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.270 [2024-02-14 20:30:21.461376] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.270 [2024-02-14 20:30:21.461382] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.270 [2024-02-14 20:30:21.463301] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.270 [2024-02-14 20:30:21.471894] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.270 [2024-02-14 20:30:21.472481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.472860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.472892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.270 [2024-02-14 20:30:21.472914] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.270 [2024-02-14 20:30:21.473152] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.270 [2024-02-14 20:30:21.473276] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.270 [2024-02-14 20:30:21.473284] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.270 [2024-02-14 20:30:21.473290] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.270 [2024-02-14 20:30:21.474876] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
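Every completion in this section carries the same status, printed by spdk_nvme_print_completion as "(00/08)": status code type 0x0 (Generic Command Status) and status code 0x08 (Command Aborted due to SQ Deletion) per the NVMe base specification, with the phase tag, more, and do-not-retry bits all clear (p:0 m:0 dnr:0). A tiny decoder covering the codes that appear here:

    #!/usr/bin/env python3
    """Decode the "(SCT/SC)" pair printed with each completion above."""
    GENERIC_SC = {                        # NVMe Generic Command Status (SCT 0x0)
        0x00: "Successful Completion",
        0x08: "Command Aborted due to SQ Deletion",
    }

    def decode(sct: int, sc: int) -> str:
        if sct == 0x0:
            return GENERIC_SC.get(sc, f"Generic Command Status 0x{sc:02x}")
        return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

    print(decode(0x00, 0x08))             # -> Command Aborted due to SQ Deletion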
00:29:44.270 [2024-02-14 20:30:21.483608] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.270 [2024-02-14 20:30:21.484190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.484540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.484571] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.270 [2024-02-14 20:30:21.484593] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.270 [2024-02-14 20:30:21.484995] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.270 [2024-02-14 20:30:21.485158] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.270 [2024-02-14 20:30:21.485166] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.270 [2024-02-14 20:30:21.485173] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.270 [2024-02-14 20:30:21.486897] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.270 [2024-02-14 20:30:21.495414] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.270 [2024-02-14 20:30:21.496022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.496405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.496436] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.270 [2024-02-14 20:30:21.496457] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.270 [2024-02-14 20:30:21.496903] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.270 [2024-02-14 20:30:21.497049] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.270 [2024-02-14 20:30:21.497057] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.270 [2024-02-14 20:30:21.497063] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.270 [2024-02-14 20:30:21.498685] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.270 [2024-02-14 20:30:21.507149] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.270 [2024-02-14 20:30:21.507691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.508105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.508139] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.270 [2024-02-14 20:30:21.508146] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.270 [2024-02-14 20:30:21.508277] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.270 [2024-02-14 20:30:21.508367] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.270 [2024-02-14 20:30:21.508374] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.270 [2024-02-14 20:30:21.508380] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.270 [2024-02-14 20:30:21.510161] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.270 [2024-02-14 20:30:21.519069] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.270 [2024-02-14 20:30:21.519666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.520135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.520165] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.270 [2024-02-14 20:30:21.520186] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.270 [2024-02-14 20:30:21.520726] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.270 [2024-02-14 20:30:21.521017] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.270 [2024-02-14 20:30:21.521042] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.270 [2024-02-14 20:30:21.521061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.270 [2024-02-14 20:30:21.523069] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.270 [2024-02-14 20:30:21.530714] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.270 [2024-02-14 20:30:21.531053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.531472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.531503] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.270 [2024-02-14 20:30:21.531524] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.270 [2024-02-14 20:30:21.532020] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.270 [2024-02-14 20:30:21.532227] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.270 [2024-02-14 20:30:21.532235] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.270 [2024-02-14 20:30:21.532241] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.270 [2024-02-14 20:30:21.534035] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.270 [2024-02-14 20:30:21.542568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.270 [2024-02-14 20:30:21.543096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.543528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.270 [2024-02-14 20:30:21.543559] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.270 [2024-02-14 20:30:21.543580] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.270 [2024-02-14 20:30:21.543925] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.270 [2024-02-14 20:30:21.544306] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.271 [2024-02-14 20:30:21.544330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.271 [2024-02-14 20:30:21.544350] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.271 [2024-02-14 20:30:21.546178] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.271 [2024-02-14 20:30:21.554394] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.271 [2024-02-14 20:30:21.554987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.555391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.555421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.271 [2024-02-14 20:30:21.555442] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.271 [2024-02-14 20:30:21.555837] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.271 [2024-02-14 20:30:21.556120] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.271 [2024-02-14 20:30:21.556151] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.271 [2024-02-14 20:30:21.556171] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.271 [2024-02-14 20:30:21.558055] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.271 [2024-02-14 20:30:21.566273] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.271 [2024-02-14 20:30:21.566838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.567313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.567344] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.271 [2024-02-14 20:30:21.567365] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.271 [2024-02-14 20:30:21.567707] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.271 [2024-02-14 20:30:21.568089] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.271 [2024-02-14 20:30:21.568113] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.271 [2024-02-14 20:30:21.568134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.271 [2024-02-14 20:30:21.570134] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
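The reset attempts land roughly every 12 ms (20:30:21.376118, .388181, .400129, .412070, ...). The cadence can be measured by pulling the wall-clock stamps off the nvme_ctrlr_disconnect notices and differencing them; a hypothetical log-mining helper (file name assumed):

    #!/usr/bin/env python3
    """Measure the reset-retry cadence from the nvme_ctrlr_disconnect notices."""
    import re
    from datetime import datetime

    STAMP_RE = re.compile(
        r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\] "
        r"nvme_ctrlr\.c:1638:nvme_ctrlr_disconnect")

    def gaps_ms(text):
        stamps = [datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")
                  for s in STAMP_RE.findall(text)]
        return [(b - a).total_seconds() * 1e3 for a, b in zip(stamps, stamps[1:])]

    if __name__ == "__main__":
        with open("console.log") as fh:   # assumed file name
            gaps = gaps_ms(fh.read())
        if gaps:
            print(f"{len(gaps) + 1} attempts, mean gap {sum(gaps) / len(gaps):.1f} ms")
        else:
            print("no reset attempts found")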
00:29:44.271 [2024-02-14 20:30:21.577961] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.271 [2024-02-14 20:30:21.578551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.579026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.579059] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.271 [2024-02-14 20:30:21.579081] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.271 [2024-02-14 20:30:21.579495] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.271 [2024-02-14 20:30:21.579606] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.271 [2024-02-14 20:30:21.579613] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.271 [2024-02-14 20:30:21.579619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.271 [2024-02-14 20:30:21.581445] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.271 [2024-02-14 20:30:21.589681] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.271 [2024-02-14 20:30:21.590128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.590592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.590623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.271 [2024-02-14 20:30:21.590654] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.271 [2024-02-14 20:30:21.590808] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.271 [2024-02-14 20:30:21.590903] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.271 [2024-02-14 20:30:21.590911] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.271 [2024-02-14 20:30:21.590920] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.271 [2024-02-14 20:30:21.592432] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.271 [2024-02-14 20:30:21.601590] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.271 [2024-02-14 20:30:21.602152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.602602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.602633] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.271 [2024-02-14 20:30:21.602678] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.271 [2024-02-14 20:30:21.603059] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.271 [2024-02-14 20:30:21.603369] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.271 [2024-02-14 20:30:21.603377] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.271 [2024-02-14 20:30:21.603383] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.271 [2024-02-14 20:30:21.605180] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.271 [2024-02-14 20:30:21.613230] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.271 [2024-02-14 20:30:21.613844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.614246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.614277] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.271 [2024-02-14 20:30:21.614298] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.271 [2024-02-14 20:30:21.614497] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.271 [2024-02-14 20:30:21.614588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.271 [2024-02-14 20:30:21.614595] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.271 [2024-02-14 20:30:21.614601] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.271 [2024-02-14 20:30:21.616316] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.271 [2024-02-14 20:30:21.624898] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.271 [2024-02-14 20:30:21.625339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.625739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.625750] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.271 [2024-02-14 20:30:21.625757] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.271 [2024-02-14 20:30:21.625856] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.271 [2024-02-14 20:30:21.625984] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.271 [2024-02-14 20:30:21.625992] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.271 [2024-02-14 20:30:21.625998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.271 [2024-02-14 20:30:21.627776] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.271 [2024-02-14 20:30:21.636818] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.271 [2024-02-14 20:30:21.637355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.637773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.271 [2024-02-14 20:30:21.637785] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.271 [2024-02-14 20:30:21.637792] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.271 [2024-02-14 20:30:21.637878] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.271 [2024-02-14 20:30:21.637991] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.271 [2024-02-14 20:30:21.637998] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.271 [2024-02-14 20:30:21.638005] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.271 [2024-02-14 20:30:21.639782] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.272 [2024-02-14 20:30:21.648677] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.272 [2024-02-14 20:30:21.649226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.272 [2024-02-14 20:30:21.649530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.272 [2024-02-14 20:30:21.649541] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.272 [2024-02-14 20:30:21.649547] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.272 [2024-02-14 20:30:21.649665] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.272 [2024-02-14 20:30:21.649764] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.272 [2024-02-14 20:30:21.649771] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.272 [2024-02-14 20:30:21.649777] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.272 [2024-02-14 20:30:21.651449] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.272 [2024-02-14 20:30:21.660596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.272 [2024-02-14 20:30:21.661185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.272 [2024-02-14 20:30:21.661536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.272 [2024-02-14 20:30:21.661546] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.272 [2024-02-14 20:30:21.661553] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.272 [2024-02-14 20:30:21.661685] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.272 [2024-02-14 20:30:21.661784] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.272 [2024-02-14 20:30:21.661792] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.272 [2024-02-14 20:30:21.661798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.272 [2024-02-14 20:30:21.663587] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.272 [2024-02-14 20:30:21.672582] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.272 [2024-02-14 20:30:21.673145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.272 [2024-02-14 20:30:21.673570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.272 [2024-02-14 20:30:21.673580] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.272 [2024-02-14 20:30:21.673586] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.272 [2024-02-14 20:30:21.673732] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.272 [2024-02-14 20:30:21.673876] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.272 [2024-02-14 20:30:21.673884] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.272 [2024-02-14 20:30:21.673890] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.272 [2024-02-14 20:30:21.675636] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.532 [2024-02-14 20:30:21.684553] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.532 [2024-02-14 20:30:21.685130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.532 [2024-02-14 20:30:21.685594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.532 [2024-02-14 20:30:21.685625] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.532 [2024-02-14 20:30:21.685658] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.532 [2024-02-14 20:30:21.685939] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.532 [2024-02-14 20:30:21.686271] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.532 [2024-02-14 20:30:21.686295] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.532 [2024-02-14 20:30:21.686315] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.532 [2024-02-14 20:30:21.688399] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.532 [2024-02-14 20:30:21.696557] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.532 [2024-02-14 20:30:21.697090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.532 [2024-02-14 20:30:21.697538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.532 [2024-02-14 20:30:21.697571] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.532 [2024-02-14 20:30:21.697578] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.532 [2024-02-14 20:30:21.697694] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.532 [2024-02-14 20:30:21.697818] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.532 [2024-02-14 20:30:21.697825] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.532 [2024-02-14 20:30:21.697831] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.532 [2024-02-14 20:30:21.699504] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.532 [2024-02-14 20:30:21.708276] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.532 [2024-02-14 20:30:21.708818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.532 [2024-02-14 20:30:21.709169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.532 [2024-02-14 20:30:21.709200] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.532 [2024-02-14 20:30:21.709221] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.532 [2024-02-14 20:30:21.709452] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.532 [2024-02-14 20:30:21.709764] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.532 [2024-02-14 20:30:21.709772] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.532 [2024-02-14 20:30:21.709778] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.532 [2024-02-14 20:30:21.711450] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.532 [2024-02-14 20:30:21.720050] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.532 [2024-02-14 20:30:21.720491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.532 [2024-02-14 20:30:21.720921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.532 [2024-02-14 20:30:21.720955] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.532 [2024-02-14 20:30:21.720976] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.532 [2024-02-14 20:30:21.721170] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.533 [2024-02-14 20:30:21.721309] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.533 [2024-02-14 20:30:21.721317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.533 [2024-02-14 20:30:21.721323] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.533 [2024-02-14 20:30:21.723065] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.533 [2024-02-14 20:30:21.731979] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.533 [2024-02-14 20:30:21.732281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.732709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.732742] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.533 [2024-02-14 20:30:21.732764] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.533 [2024-02-14 20:30:21.733193] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.533 [2024-02-14 20:30:21.733573] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.533 [2024-02-14 20:30:21.733597] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.533 [2024-02-14 20:30:21.733617] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.533 [2024-02-14 20:30:21.735557] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.533 [2024-02-14 20:30:21.743799] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.533 [2024-02-14 20:30:21.744327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.744800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.744832] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.533 [2024-02-14 20:30:21.744860] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.533 [2024-02-14 20:30:21.745240] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.533 [2024-02-14 20:30:21.745621] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.533 [2024-02-14 20:30:21.745656] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.533 [2024-02-14 20:30:21.745677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.533 [2024-02-14 20:30:21.747476] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.533 [2024-02-14 20:30:21.755709] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.533 [2024-02-14 20:30:21.756218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.756607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.756637] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.533 [2024-02-14 20:30:21.756677] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.533 [2024-02-14 20:30:21.756772] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.533 [2024-02-14 20:30:21.756882] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.533 [2024-02-14 20:30:21.756890] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.533 [2024-02-14 20:30:21.756896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.533 [2024-02-14 20:30:21.758677] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.533 [2024-02-14 20:30:21.767820] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.533 [2024-02-14 20:30:21.768335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.768687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.768720] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.533 [2024-02-14 20:30:21.768741] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.533 [2024-02-14 20:30:21.769100] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.533 [2024-02-14 20:30:21.769211] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.533 [2024-02-14 20:30:21.769219] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.533 [2024-02-14 20:30:21.769225] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.533 [2024-02-14 20:30:21.771044] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.533 [2024-02-14 20:30:21.779865] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.533 [2024-02-14 20:30:21.780305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.780664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.780696] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.533 [2024-02-14 20:30:21.780723] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.533 [2024-02-14 20:30:21.780813] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.533 [2024-02-14 20:30:21.780909] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.533 [2024-02-14 20:30:21.780917] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.533 [2024-02-14 20:30:21.780922] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.533 [2024-02-14 20:30:21.782718] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.533 [2024-02-14 20:30:21.791489] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.533 [2024-02-14 20:30:21.791948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.792251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.792290] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.533 [2024-02-14 20:30:21.792311] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.533 [2024-02-14 20:30:21.792850] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.533 [2024-02-14 20:30:21.793007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.533 [2024-02-14 20:30:21.793015] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.533 [2024-02-14 20:30:21.793021] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.533 [2024-02-14 20:30:21.794641] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.533 [2024-02-14 20:30:21.803249] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.533 [2024-02-14 20:30:21.803689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.804072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.804102] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.533 [2024-02-14 20:30:21.804124] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.533 [2024-02-14 20:30:21.804553] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.533 [2024-02-14 20:30:21.804857] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.533 [2024-02-14 20:30:21.804865] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.533 [2024-02-14 20:30:21.804871] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.533 [2024-02-14 20:30:21.806582] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.533 [2024-02-14 20:30:21.814942] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.533 [2024-02-14 20:30:21.815452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.815852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.533 [2024-02-14 20:30:21.815885] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.533 [2024-02-14 20:30:21.815907] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.533 [2024-02-14 20:30:21.816076] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.533 [2024-02-14 20:30:21.816172] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.533 [2024-02-14 20:30:21.816179] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.533 [2024-02-14 20:30:21.816185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.533 [2024-02-14 20:30:21.818007] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.533 [2024-02-14 20:30:21.826794] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.533 [2024-02-14 20:30:21.827293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.827633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.827678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.534 [2024-02-14 20:30:21.827700] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.534 [2024-02-14 20:30:21.828029] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.534 [2024-02-14 20:30:21.828288] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.534 [2024-02-14 20:30:21.828297] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.534 [2024-02-14 20:30:21.828302] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.534 [2024-02-14 20:30:21.829938] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.534 [2024-02-14 20:30:21.838748] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.534 [2024-02-14 20:30:21.839322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.839668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.839700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.534 [2024-02-14 20:30:21.839721] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.534 [2024-02-14 20:30:21.840100] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.534 [2024-02-14 20:30:21.840372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.534 [2024-02-14 20:30:21.840379] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.534 [2024-02-14 20:30:21.840385] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.534 [2024-02-14 20:30:21.842138] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.534 [2024-02-14 20:30:21.850591] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.534 [2024-02-14 20:30:21.851055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.851417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.851447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.534 [2024-02-14 20:30:21.851468] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.534 [2024-02-14 20:30:21.851764] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.534 [2024-02-14 20:30:21.852145] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.534 [2024-02-14 20:30:21.852153] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.534 [2024-02-14 20:30:21.852159] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.534 [2024-02-14 20:30:21.853886] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.534 [2024-02-14 20:30:21.862359] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.534 [2024-02-14 20:30:21.862824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.863200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.863232] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.534 [2024-02-14 20:30:21.863254] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.534 [2024-02-14 20:30:21.863586] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.534 [2024-02-14 20:30:21.863989] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.534 [2024-02-14 20:30:21.863997] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.534 [2024-02-14 20:30:21.864003] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.534 [2024-02-14 20:30:21.865836] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.534 [2024-02-14 20:30:21.874323] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.534 [2024-02-14 20:30:21.874818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.875166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.875196] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.534 [2024-02-14 20:30:21.875217] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.534 [2024-02-14 20:30:21.875597] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.534 [2024-02-14 20:30:21.875991] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.534 [2024-02-14 20:30:21.876017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.534 [2024-02-14 20:30:21.876037] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.534 [2024-02-14 20:30:21.878028] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.534 [2024-02-14 20:30:21.886313] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.534 [2024-02-14 20:30:21.886698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.886990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.887000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.534 [2024-02-14 20:30:21.887007] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.534 [2024-02-14 20:30:21.887120] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.534 [2024-02-14 20:30:21.887218] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.534 [2024-02-14 20:30:21.887229] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.534 [2024-02-14 20:30:21.887236] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.534 [2024-02-14 20:30:21.888980] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.534 [2024-02-14 20:30:21.898112] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.534 [2024-02-14 20:30:21.898612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.898971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.899002] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.534 [2024-02-14 20:30:21.899023] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.534 [2024-02-14 20:30:21.899229] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.534 [2024-02-14 20:30:21.899328] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.534 [2024-02-14 20:30:21.899336] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.534 [2024-02-14 20:30:21.899342] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.534 [2024-02-14 20:30:21.901096] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.534 [2024-02-14 20:30:21.909970] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.534 [2024-02-14 20:30:21.910491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.910899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.910932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.534 [2024-02-14 20:30:21.910953] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.534 [2024-02-14 20:30:21.911284] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.534 [2024-02-14 20:30:21.911500] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.534 [2024-02-14 20:30:21.911508] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.534 [2024-02-14 20:30:21.911513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.534 [2024-02-14 20:30:21.913303] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.534 [2024-02-14 20:30:21.921826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.534 [2024-02-14 20:30:21.922370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.922784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.534 [2024-02-14 20:30:21.922818] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.534 [2024-02-14 20:30:21.922839] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.534 [2024-02-14 20:30:21.923121] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.534 [2024-02-14 20:30:21.923380] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.534 [2024-02-14 20:30:21.923388] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.535 [2024-02-14 20:30:21.923397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.535 [2024-02-14 20:30:21.926100] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.535 [2024-02-14 20:30:21.934545] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.535 [2024-02-14 20:30:21.934997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.535 [2024-02-14 20:30:21.935391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.535 [2024-02-14 20:30:21.935422] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.535 [2024-02-14 20:30:21.935444] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.535 [2024-02-14 20:30:21.935713] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.535 [2024-02-14 20:30:21.935803] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.535 [2024-02-14 20:30:21.935812] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.535 [2024-02-14 20:30:21.935818] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.535 [2024-02-14 20:30:21.937815] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.535 [2024-02-14 20:30:21.946445] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.535 [2024-02-14 20:30:21.946985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.947330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.947340] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.797 [2024-02-14 20:30:21.947347] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.797 [2024-02-14 20:30:21.947460] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.797 [2024-02-14 20:30:21.947588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.797 [2024-02-14 20:30:21.947596] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.797 [2024-02-14 20:30:21.947602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.797 [2024-02-14 20:30:21.949297] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.797 [2024-02-14 20:30:21.958210] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.797 [2024-02-14 20:30:21.958705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.959054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.959086] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.797 [2024-02-14 20:30:21.959107] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.797 [2024-02-14 20:30:21.959481] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.797 [2024-02-14 20:30:21.959580] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.797 [2024-02-14 20:30:21.959588] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.797 [2024-02-14 20:30:21.959594] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.797 [2024-02-14 20:30:21.961115] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.797 [2024-02-14 20:30:21.969943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.797 [2024-02-14 20:30:21.970544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.970992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.971027] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.797 [2024-02-14 20:30:21.971048] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.797 [2024-02-14 20:30:21.971528] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.797 [2024-02-14 20:30:21.971731] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.797 [2024-02-14 20:30:21.971739] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.797 [2024-02-14 20:30:21.971745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.797 [2024-02-14 20:30:21.973426] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.797 [2024-02-14 20:30:21.981951] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.797 [2024-02-14 20:30:21.982415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.982770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.982803] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.797 [2024-02-14 20:30:21.982824] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.797 [2024-02-14 20:30:21.983202] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.797 [2024-02-14 20:30:21.983632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.797 [2024-02-14 20:30:21.983672] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.797 [2024-02-14 20:30:21.983695] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.797 [2024-02-14 20:30:21.985435] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.797 [2024-02-14 20:30:21.993855] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.797 [2024-02-14 20:30:21.994408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.994797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:21.994807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.797 [2024-02-14 20:30:21.994814] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.797 [2024-02-14 20:30:21.994924] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.797 [2024-02-14 20:30:21.995062] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.797 [2024-02-14 20:30:21.995070] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.797 [2024-02-14 20:30:21.995075] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.797 [2024-02-14 20:30:21.996722] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.797 [2024-02-14 20:30:22.005565] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.797 [2024-02-14 20:30:22.006043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:22.006411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:22.006442] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.797 [2024-02-14 20:30:22.006463] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.797 [2024-02-14 20:30:22.006905] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.797 [2024-02-14 20:30:22.007485] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.797 [2024-02-14 20:30:22.007510] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.797 [2024-02-14 20:30:22.007533] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.797 [2024-02-14 20:30:22.009263] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.797 [2024-02-14 20:30:22.017458] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.797 [2024-02-14 20:30:22.017960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:22.018244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:22.018254] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.797 [2024-02-14 20:30:22.018261] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.797 [2024-02-14 20:30:22.018385] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.797 [2024-02-14 20:30:22.018538] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.797 [2024-02-14 20:30:22.018546] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.797 [2024-02-14 20:30:22.018552] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.797 [2024-02-14 20:30:22.020254] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.797 [2024-02-14 20:30:22.029243] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.797 [2024-02-14 20:30:22.029784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.797 [2024-02-14 20:30:22.030192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.030223] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.798 [2024-02-14 20:30:22.030245] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.798 [2024-02-14 20:30:22.030623] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.798 [2024-02-14 20:30:22.030942] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.798 [2024-02-14 20:30:22.030951] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.798 [2024-02-14 20:30:22.030956] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.798 [2024-02-14 20:30:22.032654] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.798 [2024-02-14 20:30:22.040823] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.798 [2024-02-14 20:30:22.041277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.041708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.041742] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.798 [2024-02-14 20:30:22.041764] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.798 [2024-02-14 20:30:22.042094] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.798 [2024-02-14 20:30:22.042475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.798 [2024-02-14 20:30:22.042499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.798 [2024-02-14 20:30:22.042519] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.798 [2024-02-14 20:30:22.044388] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.798 [2024-02-14 20:30:22.052792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.798 [2024-02-14 20:30:22.053348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.053767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.053799] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.798 [2024-02-14 20:30:22.053821] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.798 [2024-02-14 20:30:22.054003] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.798 [2024-02-14 20:30:22.054113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.798 [2024-02-14 20:30:22.054121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.798 [2024-02-14 20:30:22.054127] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.798 [2024-02-14 20:30:22.055751] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.798 [2024-02-14 20:30:22.064698] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.798 [2024-02-14 20:30:22.065139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.065504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.065534] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.798 [2024-02-14 20:30:22.065555] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.798 [2024-02-14 20:30:22.065898] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.798 [2024-02-14 20:30:22.066331] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.798 [2024-02-14 20:30:22.066355] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.798 [2024-02-14 20:30:22.066375] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.798 [2024-02-14 20:30:22.068270] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.798 [2024-02-14 20:30:22.076596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.798 [2024-02-14 20:30:22.077129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.077546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.077584] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.798 [2024-02-14 20:30:22.077613] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.798 [2024-02-14 20:30:22.077701] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.798 [2024-02-14 20:30:22.077811] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.798 [2024-02-14 20:30:22.077818] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.798 [2024-02-14 20:30:22.077824] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.798 [2024-02-14 20:30:22.079469] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.798 [2024-02-14 20:30:22.088394] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.798 [2024-02-14 20:30:22.088923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.089383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.798 [2024-02-14 20:30:22.089413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:44.798 [2024-02-14 20:30:22.089435] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:44.798 [2024-02-14 20:30:22.089632] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:44.798 [2024-02-14 20:30:22.089774] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.798 [2024-02-14 20:30:22.089783] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.798 [2024-02-14 20:30:22.089789] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.798 [2024-02-14 20:30:22.091442] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.798 [2024-02-14 20:30:22.100165] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.798 [2024-02-14 20:30:22.100851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.798 [2024-02-14 20:30:22.101295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.798 [2024-02-14 20:30:22.101304] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.798 [2024-02-14 20:30:22.101311] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.798 [2024-02-14 20:30:22.101450] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.798 [2024-02-14 20:30:22.101546] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.798 [2024-02-14 20:30:22.101554] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.798 [2024-02-14 20:30:22.101560] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.798 [2024-02-14 20:30:22.103358] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.798 [2024-02-14 20:30:22.112034] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.798 [2024-02-14 20:30:22.112538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.798 [2024-02-14 20:30:22.112889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.798 [2024-02-14 20:30:22.112921] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.798 [2024-02-14 20:30:22.112950] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.798 [2024-02-14 20:30:22.113330] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.798 [2024-02-14 20:30:22.113721] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.798 [2024-02-14 20:30:22.113748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.798 [2024-02-14 20:30:22.113767] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.798 [2024-02-14 20:30:22.115437] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.798 [2024-02-14 20:30:22.123903] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.798 [2024-02-14 20:30:22.124303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.798 [2024-02-14 20:30:22.124596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.798 [2024-02-14 20:30:22.124606] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.799 [2024-02-14 20:30:22.124612] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.799 [2024-02-14 20:30:22.124726] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.799 [2024-02-14 20:30:22.124837] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.799 [2024-02-14 20:30:22.124846] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.799 [2024-02-14 20:30:22.124852] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.799 [2024-02-14 20:30:22.126560] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.799 [2024-02-14 20:30:22.135851] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.799 [2024-02-14 20:30:22.136413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.136885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.136918] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.799 [2024-02-14 20:30:22.136939] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.799 [2024-02-14 20:30:22.137070] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.799 [2024-02-14 20:30:22.137201] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.799 [2024-02-14 20:30:22.137209] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.799 [2024-02-14 20:30:22.137215] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.799 [2024-02-14 20:30:22.138993] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.799 [2024-02-14 20:30:22.147702] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.799 [2024-02-14 20:30:22.148305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.148733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.148765] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.799 [2024-02-14 20:30:22.148787] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.799 [2024-02-14 20:30:22.149174] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.799 [2024-02-14 20:30:22.149554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.799 [2024-02-14 20:30:22.149578] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.799 [2024-02-14 20:30:22.149598] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.799 [2024-02-14 20:30:22.151323] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.799 [2024-02-14 20:30:22.159645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.799 [2024-02-14 20:30:22.160145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.160618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.160659] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.799 [2024-02-14 20:30:22.160681] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.799 [2024-02-14 20:30:22.161015] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.799 [2024-02-14 20:30:22.161126] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.799 [2024-02-14 20:30:22.161133] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.799 [2024-02-14 20:30:22.161139] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.799 [2024-02-14 20:30:22.162807] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.799 [2024-02-14 20:30:22.171451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.799 [2024-02-14 20:30:22.172037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.172476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.172486] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.799 [2024-02-14 20:30:22.172493] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.799 [2024-02-14 20:30:22.172617] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.799 [2024-02-14 20:30:22.172749] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.799 [2024-02-14 20:30:22.172758] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.799 [2024-02-14 20:30:22.172764] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.799 [2024-02-14 20:30:22.174505] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.799 [2024-02-14 20:30:22.183342] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.799 [2024-02-14 20:30:22.183883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.184325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.184358] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.799 [2024-02-14 20:30:22.184379] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.799 [2024-02-14 20:30:22.184721] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.799 [2024-02-14 20:30:22.184984] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.799 [2024-02-14 20:30:22.184992] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.799 [2024-02-14 20:30:22.184998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.799 [2024-02-14 20:30:22.186913] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.799 [2024-02-14 20:30:22.195168] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.799 [2024-02-14 20:30:22.195702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.196127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.196158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.799 [2024-02-14 20:30:22.196180] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.799 [2024-02-14 20:30:22.196609] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.799 [2024-02-14 20:30:22.196805] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.799 [2024-02-14 20:30:22.196830] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.799 [2024-02-14 20:30:22.196850] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.799 [2024-02-14 20:30:22.198811] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.799 [2024-02-14 20:30:22.207074] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.799 [2024-02-14 20:30:22.207627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.208078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.799 [2024-02-14 20:30:22.208110] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:44.799 [2024-02-14 20:30:22.208132] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:44.799 [2024-02-14 20:30:22.208462] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:44.799 [2024-02-14 20:30:22.208856] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.799 [2024-02-14 20:30:22.208882] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.799 [2024-02-14 20:30:22.208902] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.799 [2024-02-14 20:30:22.211034] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.219042] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.219591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.220015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.220026] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.220032] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.220146] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.220259] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.220269] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.220276] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.222012] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.230917] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.231472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.231902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.231914] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.231920] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.232063] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.232176] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.232183] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.232189] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.234072] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.242686] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.243247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.243740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.243772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.243794] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.244223] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.244382] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.244390] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.244397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.246117] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.254613] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.255182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.255671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.255703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.255724] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.256162] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.256272] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.256280] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.256288] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.258021] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.266537] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.267093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.267497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.267528] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.267550] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.267843] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.268220] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.268227] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.268233] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.269842] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.278316] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.278890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.279398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.279429] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.279450] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.279743] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.280175] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.280199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.280218] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.282136] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.290074] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.290621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.291129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.291160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.291182] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.291510] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.291953] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.291978] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.291998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.293990] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.301843] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.302417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.302912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.302943] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.302964] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.303491] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.303573] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.303581] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.303586] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.305347] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.313593] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.314155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.314660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.314691] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.314713] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.315093] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.315407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.315415] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.315420] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.317121] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.325412] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.325944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.326377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.326387] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.326393] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.326490] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.326642] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.326655] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.326661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.328378] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
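The refusals stop only once something accepts on that address and port; in this test that something is the SPDK NVMe-oF target. Purely as a hypothetical illustration of why errno 111 disappears the moment a listener exists, a bare TCP acceptor on the same port would look like the sketch below. It speaks no NVMe/TCP, so a real initiator would still fail later, just past connect():

    /* sketch: minimal listener on 4420 -- enough to stop ECONNREFUSED,
     * nowhere near a real NVMe/TCP target (no ICReq/ICResp handling) */
    #include <stdio.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(4420);

        if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) != 0 || listen(lfd, 16) != 0) {
            perror("bind/listen");
            return 1;
        }

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);   /* peer's connect() now succeeds */
            if (cfd < 0) {
                perror("accept");
                break;
            }
            close(cfd);                          /* immediately drop the connection */
        }
        close(lfd);
        return 0;
    }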
00:29:45.096 [2024-02-14 20:30:22.337181] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.337717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.338168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.338177] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.338183] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.338287] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.338363] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.338370] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.338375] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.339955] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.349058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.349628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.350108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.350138] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.350159] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.350355] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.350494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.350502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.350508] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.352166] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.360862] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.361409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.361899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.361931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.361952] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.362281] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.362547] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.096 [2024-02-14 20:30:22.362555] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.096 [2024-02-14 20:30:22.362561] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.096 [2024-02-14 20:30:22.364237] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.096 [2024-02-14 20:30:22.372720] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.096 [2024-02-14 20:30:22.373285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.373743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.096 [2024-02-14 20:30:22.373776] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.096 [2024-02-14 20:30:22.373797] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.096 [2024-02-14 20:30:22.374275] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.096 [2024-02-14 20:30:22.374526] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.374534] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.374540] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.376223] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.384498] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.385020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.385467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.385477] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.385483] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.385597] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.385759] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.385767] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.385774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.387532] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.396401] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.396937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.397373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.397403] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.397424] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.397857] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.397956] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.397965] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.397971] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.399674] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.408215] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.408766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.409256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.409293] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.409314] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.409454] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.409578] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.409586] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.409592] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.411293] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.419888] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.420378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.420849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.420882] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.420903] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.421283] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.421553] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.421561] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.421566] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.423301] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.431669] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.432245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.432590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.432621] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.432642] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.432989] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.433320] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.433345] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.433366] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.435279] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.443274] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.443863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.444202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.444232] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.444260] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.444496] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.444606] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.444614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.444619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.446525] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.455135] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.455732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.456220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.456251] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.456273] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.456602] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.456799] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.456808] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.456814] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.458551] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.467010] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.467528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.467998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.468032] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.468054] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.468336] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.468671] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.468679] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.468687] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.470233] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.478897] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.479415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.479788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.479799] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.479806] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.479937] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.480065] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.480073] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.480079] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.481842] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.490856] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.491408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.491825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.491836] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.491842] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.491942] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.492069] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.492077] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.492083] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.493857] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.097 [2024-02-14 20:30:22.502746] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.097 [2024-02-14 20:30:22.503321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.503779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.097 [2024-02-14 20:30:22.503811] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.097 [2024-02-14 20:30:22.503832] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.097 [2024-02-14 20:30:22.504261] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.097 [2024-02-14 20:30:22.504592] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.097 [2024-02-14 20:30:22.504616] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.097 [2024-02-14 20:30:22.504636] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.097 [2024-02-14 20:30:22.506525] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.357 [2024-02-14 20:30:22.514651] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.357 [2024-02-14 20:30:22.515171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.357 [2024-02-14 20:30:22.515593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.357 [2024-02-14 20:30:22.515603] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.357 [2024-02-14 20:30:22.515609] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.515712] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.515827] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.515834] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.515841] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.517664] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.526489] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.527047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.527479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.527509] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.527530] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.527673] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.527797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.527805] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.527811] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.529547] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.538445] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.538973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.539463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.539493] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.539514] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.539922] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.540061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.540069] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.540075] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.541882] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.550407] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.550902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.551349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.551379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.551401] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.551690] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.551863] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.551873] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.551879] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.553608] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.562049] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.562611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.563050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.563081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.563102] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.563432] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.563711] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.563719] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.563725] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.565336] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.573864] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.574423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.574894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.574926] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.574947] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.575426] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.575766] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.575789] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.575795] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.577559] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.585627] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.586124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.586592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.586622] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.586643] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.586818] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.586914] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.586922] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.586931] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.588470] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.597384] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.597977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.598465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.598496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.598518] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.599009] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.599170] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.599178] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.599184] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.600830] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.609221] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.609796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.610291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.610321] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.610343] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.610683] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.611066] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.611090] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.611110] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.614064] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.621930] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.622519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.622989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.623001] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.623008] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.623114] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.623218] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.623227] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.623234] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.625065] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.633717] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.634258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.634707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.634740] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.634761] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.635182] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.635296] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.635304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.635310] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.637074] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.645650] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.646126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.646487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.646497] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.646504] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.646617] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.646750] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.646759] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.646765] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.648598] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.657531] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.658076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.658541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.658571] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.658592] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.658940] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.659009] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.659017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.659023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.660774] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.669321] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.669828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.670282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.670292] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.670298] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.670397] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.358 [2024-02-14 20:30:22.670510] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.358 [2024-02-14 20:30:22.670518] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.358 [2024-02-14 20:30:22.670525] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.358 [2024-02-14 20:30:22.672274] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.358 [2024-02-14 20:30:22.681214] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.358 [2024-02-14 20:30:22.681719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.682189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.358 [2024-02-14 20:30:22.682220] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.358 [2024-02-14 20:30:22.682242] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.358 [2024-02-14 20:30:22.682523] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.359 [2024-02-14 20:30:22.682704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.359 [2024-02-14 20:30:22.682717] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.359 [2024-02-14 20:30:22.682727] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.359 [2024-02-14 20:30:22.685672] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.359 [2024-02-14 20:30:22.693768] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.359 [2024-02-14 20:30:22.694389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.694775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.694819] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.359 [2024-02-14 20:30:22.694840] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.359 [2024-02-14 20:30:22.695120] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.359 [2024-02-14 20:30:22.695339] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.359 [2024-02-14 20:30:22.695347] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.359 [2024-02-14 20:30:22.695354] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.359 [2024-02-14 20:30:22.697350] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.359 [2024-02-14 20:30:22.705703] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.359 [2024-02-14 20:30:22.706295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.706794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.706826] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.359 [2024-02-14 20:30:22.706848] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.359 [2024-02-14 20:30:22.707078] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.359 [2024-02-14 20:30:22.707174] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.359 [2024-02-14 20:30:22.707182] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.359 [2024-02-14 20:30:22.707188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.359 [2024-02-14 20:30:22.708950] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.359 [2024-02-14 20:30:22.717604] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.359 [2024-02-14 20:30:22.718159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.718602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.718612] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.359 [2024-02-14 20:30:22.718619] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.359 [2024-02-14 20:30:22.718751] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.359 [2024-02-14 20:30:22.718894] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.359 [2024-02-14 20:30:22.718902] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.359 [2024-02-14 20:30:22.718907] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.359 [2024-02-14 20:30:22.720594] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.359 [2024-02-14 20:30:22.729497] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.359 [2024-02-14 20:30:22.730071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.730532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.730562] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.359 [2024-02-14 20:30:22.730583] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.359 [2024-02-14 20:30:22.730975] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.359 [2024-02-14 20:30:22.731163] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.359 [2024-02-14 20:30:22.731171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.359 [2024-02-14 20:30:22.731176] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.359 [2024-02-14 20:30:22.732798] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.359 [2024-02-14 20:30:22.741325] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.359 [2024-02-14 20:30:22.741909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.742401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.742443] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.359 [2024-02-14 20:30:22.742465] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.359 [2024-02-14 20:30:22.742616] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.359 [2024-02-14 20:30:22.742746] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.359 [2024-02-14 20:30:22.742754] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.359 [2024-02-14 20:30:22.742775] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.359 [2024-02-14 20:30:22.745553] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.359 [2024-02-14 20:30:22.754029] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.359 [2024-02-14 20:30:22.754575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.755046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.755079] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.359 [2024-02-14 20:30:22.755099] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.359 [2024-02-14 20:30:22.755245] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.359 [2024-02-14 20:30:22.755351] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.359 [2024-02-14 20:30:22.755359] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.359 [2024-02-14 20:30:22.755366] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.359 [2024-02-14 20:30:22.757255] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.359 [2024-02-14 20:30:22.765749] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.359 [2024-02-14 20:30:22.766305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.766790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.359 [2024-02-14 20:30:22.766823] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.359 [2024-02-14 20:30:22.766844] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.359 [2024-02-14 20:30:22.767373] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.359 [2024-02-14 20:30:22.767492] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.359 [2024-02-14 20:30:22.767500] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.359 [2024-02-14 20:30:22.767506] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.359 [2024-02-14 20:30:22.769357] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.777687] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.778135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.778565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.778575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.778584] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.778703] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.778787] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.778794] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.778800] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.780593] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.789609] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.790177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.790608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.790639] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.790675] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.790907] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.791251] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.791259] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.791265] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.793047] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.801502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.802034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.802526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.802558] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.802579] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.802745] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.802842] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.802850] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.802855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.804625] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.813245] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.813759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.814252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.814282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.814304] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.814736] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.814842] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.814849] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.814855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.816630] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.825058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.825617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.825922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.825932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.825939] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.826043] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.826160] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.826168] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.826173] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.827891] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.836861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.837395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.837791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.837824] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.837845] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.838174] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.838385] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.838393] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.838399] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.840180] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.848668] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.849252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.849743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.849775] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.849796] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.850127] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.850465] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.850488] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.850509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.852363] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.860470] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.860936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.861361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.861392] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.861413] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.861804] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.861915] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.861923] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.861929] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.863667] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.872308] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.872849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.873312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.873343] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.873364] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.873807] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.873981] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.873989] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.873996] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.876346] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.884984] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.885522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.885945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.885957] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.885964] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.886069] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.886174] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.886185] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.886191] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.887894] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.896924] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.897495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.897891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.897923] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.897944] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.898274] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.898650] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.898658] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.898664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.900323] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.908768] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.909329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.909824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.909856] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.909878] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.910306] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.910655] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.910663] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.910669] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.912399] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.920535] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.921101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.921593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.921623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.921644] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.922036] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.922368] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.620 [2024-02-14 20:30:22.922391] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.620 [2024-02-14 20:30:22.922418] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.620 [2024-02-14 20:30:22.924457] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.620 [2024-02-14 20:30:22.932198] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.620 [2024-02-14 20:30:22.932743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.933233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.620 [2024-02-14 20:30:22.933263] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.620 [2024-02-14 20:30:22.933284] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.620 [2024-02-14 20:30:22.933628] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.620 [2024-02-14 20:30:22.933770] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.621 [2024-02-14 20:30:22.933779] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.621 [2024-02-14 20:30:22.933784] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.621 [2024-02-14 20:30:22.935549] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.621 [2024-02-14 20:30:22.944011] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.621 [2024-02-14 20:30:22.944590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.945026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.945059] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.621 [2024-02-14 20:30:22.945082] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.621 [2024-02-14 20:30:22.945461] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.621 [2024-02-14 20:30:22.945802] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.621 [2024-02-14 20:30:22.945827] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.621 [2024-02-14 20:30:22.945847] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.621 [2024-02-14 20:30:22.947684] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.621 [2024-02-14 20:30:22.955829] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.621 [2024-02-14 20:30:22.956387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.956878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.956910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.621 [2024-02-14 20:30:22.956931] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.621 [2024-02-14 20:30:22.957212] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.621 [2024-02-14 20:30:22.957642] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.621 [2024-02-14 20:30:22.957677] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.621 [2024-02-14 20:30:22.957697] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.621 [2024-02-14 20:30:22.959466] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.621 [2024-02-14 20:30:22.967605] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.621 [2024-02-14 20:30:22.968165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.968629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.968672] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.621 [2024-02-14 20:30:22.968695] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.621 [2024-02-14 20:30:22.969074] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.621 [2024-02-14 20:30:22.969454] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.621 [2024-02-14 20:30:22.969478] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.621 [2024-02-14 20:30:22.969498] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.621 [2024-02-14 20:30:22.971565] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.621 [2024-02-14 20:30:22.979558] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.621 [2024-02-14 20:30:22.980146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.980633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.980679] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.621 [2024-02-14 20:30:22.980702] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.621 [2024-02-14 20:30:22.980984] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.621 [2024-02-14 20:30:22.981143] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.621 [2024-02-14 20:30:22.981150] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.621 [2024-02-14 20:30:22.981157] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.621 [2024-02-14 20:30:22.982815] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.621 [2024-02-14 20:30:22.991321] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.621 [2024-02-14 20:30:22.991849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.992284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:22.992315] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.621 [2024-02-14 20:30:22.992337] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.621 [2024-02-14 20:30:22.992778] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.621 [2024-02-14 20:30:22.992896] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.621 [2024-02-14 20:30:22.992904] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.621 [2024-02-14 20:30:22.992910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.621 [2024-02-14 20:30:22.994680] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.621 [2024-02-14 20:30:23.003354] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.621 [2024-02-14 20:30:23.003856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:23.004300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:23.004330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.621 [2024-02-14 20:30:23.004352] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.621 [2024-02-14 20:30:23.004854] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.621 [2024-02-14 20:30:23.005263] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.621 [2024-02-14 20:30:23.005271] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.621 [2024-02-14 20:30:23.005277] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.621 [2024-02-14 20:30:23.007047] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.621 [2024-02-14 20:30:23.015299] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.621 [2024-02-14 20:30:23.015863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:23.016269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:23.016301] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.621 [2024-02-14 20:30:23.016322] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.621 [2024-02-14 20:30:23.016713] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.621 [2024-02-14 20:30:23.016972] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.621 [2024-02-14 20:30:23.016980] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.621 [2024-02-14 20:30:23.016986] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.621 [2024-02-14 20:30:23.018738] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.621 [2024-02-14 20:30:23.027149] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.621 [2024-02-14 20:30:23.027697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:23.028111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.621 [2024-02-14 20:30:23.028143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.621 [2024-02-14 20:30:23.028165] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.621 [2024-02-14 20:30:23.028595] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.621 [2024-02-14 20:30:23.028911] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.621 [2024-02-14 20:30:23.028919] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.621 [2024-02-14 20:30:23.028925] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.621 [2024-02-14 20:30:23.030703] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.882 [2024-02-14 20:30:23.039042] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.882 [2024-02-14 20:30:23.039604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.039949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.039961] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.882 [2024-02-14 20:30:23.039967] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.882 [2024-02-14 20:30:23.040124] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.882 [2024-02-14 20:30:23.040222] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.882 [2024-02-14 20:30:23.040230] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.882 [2024-02-14 20:30:23.040236] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.882 [2024-02-14 20:30:23.041936] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.882 [2024-02-14 20:30:23.050821] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.882 [2024-02-14 20:30:23.051397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.051809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.051842] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.882 [2024-02-14 20:30:23.051863] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.882 [2024-02-14 20:30:23.052008] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.882 [2024-02-14 20:30:23.052118] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.882 [2024-02-14 20:30:23.052126] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.882 [2024-02-14 20:30:23.052132] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.882 [2024-02-14 20:30:23.053906] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.882 [2024-02-14 20:30:23.062672] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.882 [2024-02-14 20:30:23.063227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.063671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.063703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.882 [2024-02-14 20:30:23.063725] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.882 [2024-02-14 20:30:23.064167] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.882 [2024-02-14 20:30:23.064305] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.882 [2024-02-14 20:30:23.064313] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.882 [2024-02-14 20:30:23.064319] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.882 [2024-02-14 20:30:23.066196] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.882 [2024-02-14 20:30:23.074670] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.882 [2024-02-14 20:30:23.075211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.075695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.075736] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.882 [2024-02-14 20:30:23.075758] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.882 [2024-02-14 20:30:23.076089] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.882 [2024-02-14 20:30:23.076303] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.882 [2024-02-14 20:30:23.076310] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.882 [2024-02-14 20:30:23.076315] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.882 [2024-02-14 20:30:23.077976] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.882 [2024-02-14 20:30:23.086506] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.882 [2024-02-14 20:30:23.087124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.087600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.087631] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.882 [2024-02-14 20:30:23.087668] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.882 [2024-02-14 20:30:23.088148] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.882 [2024-02-14 20:30:23.088342] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.882 [2024-02-14 20:30:23.088350] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.882 [2024-02-14 20:30:23.088356] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.882 [2024-02-14 20:30:23.090111] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.882 [2024-02-14 20:30:23.098383] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.882 [2024-02-14 20:30:23.098932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.099399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.099429] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.882 [2024-02-14 20:30:23.099450] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.882 [2024-02-14 20:30:23.099744] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.882 [2024-02-14 20:30:23.100078] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.882 [2024-02-14 20:30:23.100102] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.882 [2024-02-14 20:30:23.100122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.882 [2024-02-14 20:30:23.101927] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.882 [2024-02-14 20:30:23.110328] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.882 [2024-02-14 20:30:23.110879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.111308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.111339] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.882 [2024-02-14 20:30:23.111367] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.882 [2024-02-14 20:30:23.111762] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.882 [2024-02-14 20:30:23.112244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.882 [2024-02-14 20:30:23.112267] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.882 [2024-02-14 20:30:23.112288] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.882 [2024-02-14 20:30:23.114032] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.882 [2024-02-14 20:30:23.122273] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.882 [2024-02-14 20:30:23.122868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.123340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.882 [2024-02-14 20:30:23.123371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.123393] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.123624] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.124069] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.124095] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.124115] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.126011] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.134185] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.134781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.135188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.135219] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.135241] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.135360] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.135427] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.135435] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.135441] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.137015] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.146095] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.146616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.147045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.147077] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.147098] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.147434] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.147631] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.147643] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.147659] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.150481] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.158840] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.159158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.159511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.159541] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.159562] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.159955] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.160256] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.160265] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.160271] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.162251] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.170802] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.171276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.171682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.171715] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.171736] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.172118] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.172415] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.172424] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.172430] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.174278] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.182703] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.183128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.183480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.183510] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.183531] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.183972] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.184312] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.184320] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.184326] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.185957] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.194587] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.195122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.195321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.195331] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.195338] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.195437] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.195564] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.195572] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.195578] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.197353] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.206441] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.206883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.207236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.207267] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.207289] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.207536] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.207650] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.207658] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.207664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.209323] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.218059] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.218599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.218958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.218990] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.219012] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.219188] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.219372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.219388] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.219397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.222559] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.230276] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.230874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.231278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.231308] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.231329] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.231523] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.231632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.231640] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.231651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.233596] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.242070] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.242578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.242932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.242964] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.242985] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.243363] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.243600] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.243608] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.243614] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.245318] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.253811] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.254382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.254720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.254752] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.254773] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.255101] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.255465] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.255473] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.255481] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.257096] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.265812] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.266233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.266788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.266799] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.266805] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.266904] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.267002] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.267010] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.267016] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.268617] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.277731] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.278175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.278587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.278617] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.278638] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.278801] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.278929] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.278937] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.278943] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.280587] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.883 [2024-02-14 20:30:23.289502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.883 [2024-02-14 20:30:23.289944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.290500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.883 [2024-02-14 20:30:23.290531] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:45.883 [2024-02-14 20:30:23.290552] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:45.883 [2024-02-14 20:30:23.290816] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:45.883 [2024-02-14 20:30:23.290941] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.883 [2024-02-14 20:30:23.290949] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.883 [2024-02-14 20:30:23.290955] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.883 [2024-02-14 20:30:23.292683] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.143 [2024-02-14 20:30:23.301312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.143 [2024-02-14 20:30:23.301753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.302109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.302118] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.143 [2024-02-14 20:30:23.302125] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.143 [2024-02-14 20:30:23.302209] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.143 [2024-02-14 20:30:23.302321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.143 [2024-02-14 20:30:23.302328] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.143 [2024-02-14 20:30:23.302334] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.143 [2024-02-14 20:30:23.304180] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.143 [2024-02-14 20:30:23.313204] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.143 [2024-02-14 20:30:23.313749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.314108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.314139] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.143 [2024-02-14 20:30:23.314160] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.143 [2024-02-14 20:30:23.314489] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.143 [2024-02-14 20:30:23.314756] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.143 [2024-02-14 20:30:23.314765] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.143 [2024-02-14 20:30:23.314771] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.143 [2024-02-14 20:30:23.316617] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.143 [2024-02-14 20:30:23.325345] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.143 [2024-02-14 20:30:23.325833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.326247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.326277] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.143 [2024-02-14 20:30:23.326298] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.143 [2024-02-14 20:30:23.326687] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.143 [2024-02-14 20:30:23.326891] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.143 [2024-02-14 20:30:23.326900] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.143 [2024-02-14 20:30:23.326906] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.143 [2024-02-14 20:30:23.328574] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.143 [2024-02-14 20:30:23.337111] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.143 [2024-02-14 20:30:23.337517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.337918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.337951] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.143 [2024-02-14 20:30:23.337972] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.143 [2024-02-14 20:30:23.338303] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.143 [2024-02-14 20:30:23.338589] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.143 [2024-02-14 20:30:23.338597] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.143 [2024-02-14 20:30:23.338603] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.143 [2024-02-14 20:30:23.340203] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.143 [2024-02-14 20:30:23.349063] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.143 [2024-02-14 20:30:23.349594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.349951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.349983] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.143 [2024-02-14 20:30:23.350004] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.143 [2024-02-14 20:30:23.350334] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.143 [2024-02-14 20:30:23.350713] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.143 [2024-02-14 20:30:23.350727] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.143 [2024-02-14 20:30:23.350736] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.143 [2024-02-14 20:30:23.353664] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.143 [2024-02-14 20:30:23.361422] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.143 [2024-02-14 20:30:23.361946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.362268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.362300] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.143 [2024-02-14 20:30:23.362321] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.143 [2024-02-14 20:30:23.362711] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.143 [2024-02-14 20:30:23.363005] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.143 [2024-02-14 20:30:23.363014] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.143 [2024-02-14 20:30:23.363020] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.143 [2024-02-14 20:30:23.364923] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.143 [2024-02-14 20:30:23.373142] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.143 [2024-02-14 20:30:23.373732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.374156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.374187] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.143 [2024-02-14 20:30:23.374209] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.143 [2024-02-14 20:30:23.374639] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.143 [2024-02-14 20:30:23.374798] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.143 [2024-02-14 20:30:23.374806] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.143 [2024-02-14 20:30:23.374812] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.143 [2024-02-14 20:30:23.376494] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.143 [2024-02-14 20:30:23.384933] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.143 [2024-02-14 20:30:23.385576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.385971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.385982] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.143 [2024-02-14 20:30:23.385989] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.143 [2024-02-14 20:30:23.386116] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.143 [2024-02-14 20:30:23.386273] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.143 [2024-02-14 20:30:23.386281] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.143 [2024-02-14 20:30:23.386287] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.143 [2024-02-14 20:30:23.388050] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.143 [2024-02-14 20:30:23.396906] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.143 [2024-02-14 20:30:23.397345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.397736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.143 [2024-02-14 20:30:23.397768] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.143 [2024-02-14 20:30:23.397790] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.143 [2024-02-14 20:30:23.397945] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.398027] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.398035] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.398041] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.399771] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.144 [2024-02-14 20:30:23.408725] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.144 [2024-02-14 20:30:23.409200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.409547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.409585] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.144 [2024-02-14 20:30:23.409607] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.144 [2024-02-14 20:30:23.409866] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.409977] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.409985] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.409991] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.411617] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.144 [2024-02-14 20:30:23.420547] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.144 [2024-02-14 20:30:23.420966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.421327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.421358] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.144 [2024-02-14 20:30:23.421379] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.144 [2024-02-14 20:30:23.421721] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.422170] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.422182] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.422192] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.424905] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.144 [2024-02-14 20:30:23.433074] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.144 [2024-02-14 20:30:23.433516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.433919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.433954] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.144 [2024-02-14 20:30:23.433976] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.144 [2024-02-14 20:30:23.434260] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.434417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.434425] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.434432] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.436497] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.144 [2024-02-14 20:30:23.444912] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.144 [2024-02-14 20:30:23.445357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.445548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.445579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.144 [2024-02-14 20:30:23.445607] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.144 [2024-02-14 20:30:23.445963] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.446048] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.446056] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.446062] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.447741] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.144 [2024-02-14 20:30:23.456555] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.144 [2024-02-14 20:30:23.457045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.457444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.457473] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.144 [2024-02-14 20:30:23.457495] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.144 [2024-02-14 20:30:23.457887] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.458249] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.458257] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.458263] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.459990] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.144 [2024-02-14 20:30:23.468474] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.144 [2024-02-14 20:30:23.468979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.469388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.469419] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.144 [2024-02-14 20:30:23.469441] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.144 [2024-02-14 20:30:23.469881] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.470116] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.470150] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.470157] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.471929] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.144 [2024-02-14 20:30:23.480367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.144 [2024-02-14 20:30:23.480925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.481333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.481364] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.144 [2024-02-14 20:30:23.481386] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.144 [2024-02-14 20:30:23.481881] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.482143] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.482156] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.482165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.484860] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.144 [2024-02-14 20:30:23.492990] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.144 [2024-02-14 20:30:23.493469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.493870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.493905] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.144 [2024-02-14 20:30:23.493926] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.144 [2024-02-14 20:30:23.494126] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.494277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.494286] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.494293] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.496135] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.144 [2024-02-14 20:30:23.504543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.144 [2024-02-14 20:30:23.505127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.505473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.144 [2024-02-14 20:30:23.505504] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.144 [2024-02-14 20:30:23.505525] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.144 [2024-02-14 20:30:23.505966] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.144 [2024-02-14 20:30:23.506269] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.144 [2024-02-14 20:30:23.506277] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.144 [2024-02-14 20:30:23.506283] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.144 [2024-02-14 20:30:23.508040] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.145 [2024-02-14 20:30:23.516499] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.145 [2024-02-14 20:30:23.516995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.145 [2024-02-14 20:30:23.517334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.145 [2024-02-14 20:30:23.517364] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.145 [2024-02-14 20:30:23.517386] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.145 [2024-02-14 20:30:23.517528] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.145 [2024-02-14 20:30:23.517753] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.145 [2024-02-14 20:30:23.517763] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.145 [2024-02-14 20:30:23.517769] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.145 [2024-02-14 20:30:23.519467] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.145 [2024-02-14 20:30:23.528520] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.145 [2024-02-14 20:30:23.529020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.145 [2024-02-14 20:30:23.529324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.145 [2024-02-14 20:30:23.529334] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.145 [2024-02-14 20:30:23.529341] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.145 [2024-02-14 20:30:23.529455] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.145 [2024-02-14 20:30:23.529553] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.145 [2024-02-14 20:30:23.529561] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.145 [2024-02-14 20:30:23.529568] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.145 [2024-02-14 20:30:23.531144] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.145 [2024-02-14 20:30:23.540597] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.145 [2024-02-14 20:30:23.541110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.145 [2024-02-14 20:30:23.541442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.145 [2024-02-14 20:30:23.541453] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.145 [2024-02-14 20:30:23.541460] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.145 [2024-02-14 20:30:23.541573] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.145 [2024-02-14 20:30:23.541662] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.145 [2024-02-14 20:30:23.541669] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.145 [2024-02-14 20:30:23.541675] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.145 [2024-02-14 20:30:23.543305] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.145 [2024-02-14 20:30:23.552610] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.145 [2024-02-14 20:30:23.553152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.145 [2024-02-14 20:30:23.553449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.145 [2024-02-14 20:30:23.553459] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.145 [2024-02-14 20:30:23.553466] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.145 [2024-02-14 20:30:23.553593] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.145 [2024-02-14 20:30:23.553696] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.145 [2024-02-14 20:30:23.553708] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.145 [2024-02-14 20:30:23.553714] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.145 [2024-02-14 20:30:23.555344] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.404 [2024-02-14 20:30:23.564568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.404 [2024-02-14 20:30:23.565087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.565444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.565475] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.404 [2024-02-14 20:30:23.565496] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.404 [2024-02-14 20:30:23.565836] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.404 [2024-02-14 20:30:23.566317] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.404 [2024-02-14 20:30:23.566341] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.404 [2024-02-14 20:30:23.566361] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.404 [2024-02-14 20:30:23.568198] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.404 [2024-02-14 20:30:23.576563] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.404 [2024-02-14 20:30:23.577117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.577515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.577546] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.404 [2024-02-14 20:30:23.577567] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.404 [2024-02-14 20:30:23.578059] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.404 [2024-02-14 20:30:23.578232] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.404 [2024-02-14 20:30:23.578240] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.404 [2024-02-14 20:30:23.578246] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.404 [2024-02-14 20:30:23.580143] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.404 [2024-02-14 20:30:23.588361] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.404 [2024-02-14 20:30:23.588944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.589416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.589447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.404 [2024-02-14 20:30:23.589468] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.404 [2024-02-14 20:30:23.589809] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.404 [2024-02-14 20:30:23.590290] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.404 [2024-02-14 20:30:23.590314] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.404 [2024-02-14 20:30:23.590341] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.404 [2024-02-14 20:30:23.592162] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.404 [2024-02-14 20:30:23.600299] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.404 [2024-02-14 20:30:23.600822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.601244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.601275] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.404 [2024-02-14 20:30:23.601296] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.404 [2024-02-14 20:30:23.601453] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.404 [2024-02-14 20:30:23.601549] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.404 [2024-02-14 20:30:23.601557] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.404 [2024-02-14 20:30:23.601563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.404 [2024-02-14 20:30:23.603241] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.404 [2024-02-14 20:30:23.612054] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.404 [2024-02-14 20:30:23.612527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.612946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.404 [2024-02-14 20:30:23.612978] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.404 [2024-02-14 20:30:23.612999] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.404 [2024-02-14 20:30:23.613377] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.404 [2024-02-14 20:30:23.613614] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.613622] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.613628] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.615372] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.405 [2024-02-14 20:30:23.623826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.405 [2024-02-14 20:30:23.624436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.624876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.624907] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.405 [2024-02-14 20:30:23.624928] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.405 [2024-02-14 20:30:23.625257] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.405 [2024-02-14 20:30:23.625360] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.625368] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.625375] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.627008] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.405 [2024-02-14 20:30:23.635694] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.405 [2024-02-14 20:30:23.636308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.636778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.636811] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.405 [2024-02-14 20:30:23.636832] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.405 [2024-02-14 20:30:23.637212] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.405 [2024-02-14 20:30:23.637533] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.637541] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.637547] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.639397] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.405 [2024-02-14 20:30:23.647673] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.405 [2024-02-14 20:30:23.648257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.648671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.648714] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.405 [2024-02-14 20:30:23.648721] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.405 [2024-02-14 20:30:23.648860] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.405 [2024-02-14 20:30:23.648984] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.648992] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.648998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.650754] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.405 [2024-02-14 20:30:23.659611] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.405 [2024-02-14 20:30:23.660209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.660682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.660714] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.405 [2024-02-14 20:30:23.660735] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.405 [2024-02-14 20:30:23.660857] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.405 [2024-02-14 20:30:23.660953] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.660960] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.660966] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.662720] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.405 [2024-02-14 20:30:23.671474] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.405 [2024-02-14 20:30:23.672056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.672473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.672504] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.405 [2024-02-14 20:30:23.672525] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.405 [2024-02-14 20:30:23.672693] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.405 [2024-02-14 20:30:23.672790] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.672798] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.672804] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.674562] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.405 [2024-02-14 20:30:23.683362] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.405 [2024-02-14 20:30:23.683843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.684421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.684451] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.405 [2024-02-14 20:30:23.684472] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.405 [2024-02-14 20:30:23.684855] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.405 [2024-02-14 20:30:23.685039] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.685051] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.685060] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.687755] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.405 [2024-02-14 20:30:23.696128] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.405 [2024-02-14 20:30:23.696622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.696919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.696951] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.405 [2024-02-14 20:30:23.696972] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.405 [2024-02-14 20:30:23.697303] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.405 [2024-02-14 20:30:23.697465] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.697473] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.697479] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.699167] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.405 [2024-02-14 20:30:23.708035] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.405 [2024-02-14 20:30:23.708625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.709109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.709140] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.405 [2024-02-14 20:30:23.709162] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.405 [2024-02-14 20:30:23.709541] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.405 [2024-02-14 20:30:23.709843] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.709851] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.709857] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.711621] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.405 [2024-02-14 20:30:23.720007] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.405 [2024-02-14 20:30:23.720569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.720990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.405 [2024-02-14 20:30:23.721021] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.405 [2024-02-14 20:30:23.721042] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.405 [2024-02-14 20:30:23.721421] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.405 [2024-02-14 20:30:23.721792] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.405 [2024-02-14 20:30:23.721800] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.405 [2024-02-14 20:30:23.721806] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.405 [2024-02-14 20:30:23.723501] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.406 [2024-02-14 20:30:23.731846] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.406 [2024-02-14 20:30:23.732398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.732852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.732863] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.406 [2024-02-14 20:30:23.732870] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.406 [2024-02-14 20:30:23.733003] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.406 [2024-02-14 20:30:23.733134] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.406 [2024-02-14 20:30:23.733141] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.406 [2024-02-14 20:30:23.733147] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.406 [2024-02-14 20:30:23.734675] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.406 [2024-02-14 20:30:23.743655] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.406 [2024-02-14 20:30:23.744140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.744607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.744644] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.406 [2024-02-14 20:30:23.744683] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.406 [2024-02-14 20:30:23.744822] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.406 [2024-02-14 20:30:23.744960] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.406 [2024-02-14 20:30:23.744968] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.406 [2024-02-14 20:30:23.744974] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.406 [2024-02-14 20:30:23.746717] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.406 [2024-02-14 20:30:23.755423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.406 [2024-02-14 20:30:23.755868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.756259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.756289] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.406 [2024-02-14 20:30:23.756311] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.406 [2024-02-14 20:30:23.756705] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.406 [2024-02-14 20:30:23.756819] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.406 [2024-02-14 20:30:23.756827] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.406 [2024-02-14 20:30:23.756832] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.406 [2024-02-14 20:30:23.758456] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.406 [2024-02-14 20:30:23.767330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.406 [2024-02-14 20:30:23.767828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.768254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.768284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.406 [2024-02-14 20:30:23.768305] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.406 [2024-02-14 20:30:23.768613] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.406 [2024-02-14 20:30:23.768741] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.406 [2024-02-14 20:30:23.768750] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.406 [2024-02-14 20:30:23.768755] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.406 [2024-02-14 20:30:23.770441] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.406 [2024-02-14 20:30:23.779140] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.406 [2024-02-14 20:30:23.779704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.780175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.780205] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.406 [2024-02-14 20:30:23.780235] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.406 [2024-02-14 20:30:23.780614] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.406 [2024-02-14 20:30:23.780936] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.406 [2024-02-14 20:30:23.780944] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.406 [2024-02-14 20:30:23.780950] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.406 [2024-02-14 20:30:23.782435] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.406 [2024-02-14 20:30:23.790860] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.406 [2024-02-14 20:30:23.791429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.791897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.791932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.406 [2024-02-14 20:30:23.791939] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.406 [2024-02-14 20:30:23.792043] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.406 [2024-02-14 20:30:23.792161] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.406 [2024-02-14 20:30:23.792168] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.406 [2024-02-14 20:30:23.792174] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.406 [2024-02-14 20:30:23.793794] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.406 [2024-02-14 20:30:23.802739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.406 [2024-02-14 20:30:23.803310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.803710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.803744] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.406 [2024-02-14 20:30:23.803765] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.406 [2024-02-14 20:30:23.803919] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.406 [2024-02-14 20:30:23.804010] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.406 [2024-02-14 20:30:23.804017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.406 [2024-02-14 20:30:23.804023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.406 [2024-02-14 20:30:23.805459] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.406 [2024-02-14 20:30:23.814562] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.406 [2024-02-14 20:30:23.815151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.815571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.406 [2024-02-14 20:30:23.815581] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.406 [2024-02-14 20:30:23.815588] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.406 [2024-02-14 20:30:23.815695] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.406 [2024-02-14 20:30:23.815823] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.406 [2024-02-14 20:30:23.815830] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.406 [2024-02-14 20:30:23.815836] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.406 [2024-02-14 20:30:23.817390] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.666 [2024-02-14 20:30:23.826535] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.666 [2024-02-14 20:30:23.827056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.827466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.827497] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.666 [2024-02-14 20:30:23.827519] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.666 [2024-02-14 20:30:23.827962] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.666 [2024-02-14 20:30:23.828120] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.666 [2024-02-14 20:30:23.828128] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.666 [2024-02-14 20:30:23.828134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.666 [2024-02-14 20:30:23.829978] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.666 [2024-02-14 20:30:23.838207] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.666 [2024-02-14 20:30:23.838762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.839164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.839195] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.666 [2024-02-14 20:30:23.839217] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.666 [2024-02-14 20:30:23.839350] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.666 [2024-02-14 20:30:23.839474] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.666 [2024-02-14 20:30:23.839482] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.666 [2024-02-14 20:30:23.839487] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.666 [2024-02-14 20:30:23.841356] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.666 [2024-02-14 20:30:23.849977] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.666 [2024-02-14 20:30:23.850526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.850955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.850965] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.666 [2024-02-14 20:30:23.850972] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.666 [2024-02-14 20:30:23.851049] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.666 [2024-02-14 20:30:23.851157] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.666 [2024-02-14 20:30:23.851164] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.666 [2024-02-14 20:30:23.851169] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.666 [2024-02-14 20:30:23.852739] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.666 [2024-02-14 20:30:23.861737] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.666 [2024-02-14 20:30:23.862263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.862667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.862699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.666 [2024-02-14 20:30:23.862720] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.666 [2024-02-14 20:30:23.863099] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.666 [2024-02-14 20:30:23.863263] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.666 [2024-02-14 20:30:23.863271] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.666 [2024-02-14 20:30:23.863276] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.666 [2024-02-14 20:30:23.864869] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.666 [2024-02-14 20:30:23.873592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.666 [2024-02-14 20:30:23.874157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.874624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.874667] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.666 [2024-02-14 20:30:23.874690] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.666 [2024-02-14 20:30:23.875069] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.666 [2024-02-14 20:30:23.875283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.666 [2024-02-14 20:30:23.875291] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.666 [2024-02-14 20:30:23.875298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.666 [2024-02-14 20:30:23.877066] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.666 [2024-02-14 20:30:23.885233] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.666 [2024-02-14 20:30:23.885737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.886011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.666 [2024-02-14 20:30:23.886041] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.666 [2024-02-14 20:30:23.886062] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.666 [2024-02-14 20:30:23.886442] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.666 [2024-02-14 20:30:23.886838] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.666 [2024-02-14 20:30:23.886871] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.886891] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.889606] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:23.898169] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.667 [2024-02-14 20:30:23.898717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.899136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.899147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.667 [2024-02-14 20:30:23.899153] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.667 [2024-02-14 20:30:23.899242] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.667 [2024-02-14 20:30:23.899362] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.667 [2024-02-14 20:30:23.899370] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.899378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.901299] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:23.910123] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.667 [2024-02-14 20:30:23.910468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.910893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.910904] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.667 [2024-02-14 20:30:23.910910] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.667 [2024-02-14 20:30:23.910994] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.667 [2024-02-14 20:30:23.911122] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.667 [2024-02-14 20:30:23.911129] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.911136] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.912742] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:23.921849] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.667 [2024-02-14 20:30:23.922418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.922881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.922914] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.667 [2024-02-14 20:30:23.922936] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.667 [2024-02-14 20:30:23.923141] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.667 [2024-02-14 20:30:23.923223] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.667 [2024-02-14 20:30:23.923230] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.923239] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.924934] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:23.933768] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.667 [2024-02-14 20:30:23.934352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.934707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.934718] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.667 [2024-02-14 20:30:23.934725] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.667 [2024-02-14 20:30:23.934829] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.667 [2024-02-14 20:30:23.934906] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.667 [2024-02-14 20:30:23.934913] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.934918] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.936535] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:23.945696] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.667 [2024-02-14 20:30:23.946264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.946667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.946700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.667 [2024-02-14 20:30:23.946722] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.667 [2024-02-14 20:30:23.947051] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.667 [2024-02-14 20:30:23.947410] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.667 [2024-02-14 20:30:23.947418] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.947424] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.949172] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:23.957566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.667 [2024-02-14 20:30:23.958155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.958625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.958668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.667 [2024-02-14 20:30:23.958691] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.667 [2024-02-14 20:30:23.959019] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.667 [2024-02-14 20:30:23.959409] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.667 [2024-02-14 20:30:23.959417] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.959423] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.960981] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:23.969296] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.667 [2024-02-14 20:30:23.969909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.970356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.970387] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.667 [2024-02-14 20:30:23.970408] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.667 [2024-02-14 20:30:23.970555] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.667 [2024-02-14 20:30:23.970652] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.667 [2024-02-14 20:30:23.970660] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.970666] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.972367] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:23.981132] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.667 [2024-02-14 20:30:23.981608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.982033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.982066] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.667 [2024-02-14 20:30:23.982088] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.667 [2024-02-14 20:30:23.982478] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.667 [2024-02-14 20:30:23.982597] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.667 [2024-02-14 20:30:23.982605] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.982610] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.984275] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:23.992936] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.667 [2024-02-14 20:30:23.993517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.993942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.667 [2024-02-14 20:30:23.993974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.667 [2024-02-14 20:30:23.993996] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.667 [2024-02-14 20:30:23.994526] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.667 [2024-02-14 20:30:23.994617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.667 [2024-02-14 20:30:23.994624] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.667 [2024-02-14 20:30:23.994630] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.667 [2024-02-14 20:30:23.996281] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.667 [2024-02-14 20:30:24.004605] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.668 [2024-02-14 20:30:24.005115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.005562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.005592] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.668 [2024-02-14 20:30:24.005613] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.668 [2024-02-14 20:30:24.005909] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.668 [2024-02-14 20:30:24.006226] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.668 [2024-02-14 20:30:24.006234] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.668 [2024-02-14 20:30:24.006240] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.668 [2024-02-14 20:30:24.007997] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.668 [2024-02-14 20:30:24.016559] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.668 [2024-02-14 20:30:24.017097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.017566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.017595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.668 [2024-02-14 20:30:24.017626] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.668 [2024-02-14 20:30:24.017742] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.668 [2024-02-14 20:30:24.017867] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.668 [2024-02-14 20:30:24.017875] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.668 [2024-02-14 20:30:24.017881] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.668 [2024-02-14 20:30:24.019527] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.668 [2024-02-14 20:30:24.028225] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.668 [2024-02-14 20:30:24.028837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.029304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.029334] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.668 [2024-02-14 20:30:24.029355] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.668 [2024-02-14 20:30:24.029761] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.668 [2024-02-14 20:30:24.029991] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.668 [2024-02-14 20:30:24.030003] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.668 [2024-02-14 20:30:24.030013] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.668 [2024-02-14 20:30:24.032820] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.668 [2024-02-14 20:30:24.040726] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.668 [2024-02-14 20:30:24.041333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.041687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.041720] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.668 [2024-02-14 20:30:24.041749] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.668 [2024-02-14 20:30:24.041889] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.668 [2024-02-14 20:30:24.042013] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.668 [2024-02-14 20:30:24.042021] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.668 [2024-02-14 20:30:24.042028] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.668 [2024-02-14 20:30:24.043994] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.668 [2024-02-14 20:30:24.052551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.668 [2024-02-14 20:30:24.053143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.053550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.053580] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.668 [2024-02-14 20:30:24.053601] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.668 [2024-02-14 20:30:24.053895] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.668 [2024-02-14 20:30:24.054329] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.668 [2024-02-14 20:30:24.054354] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.668 [2024-02-14 20:30:24.054373] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.668 [2024-02-14 20:30:24.056179] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.668 [2024-02-14 20:30:24.064353] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.668 [2024-02-14 20:30:24.064920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.065315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.065346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.668 [2024-02-14 20:30:24.065369] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.668 [2024-02-14 20:30:24.065499] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.668 [2024-02-14 20:30:24.065576] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.668 [2024-02-14 20:30:24.065583] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.668 [2024-02-14 20:30:24.065589] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.668 [2024-02-14 20:30:24.067212] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.668 [2024-02-14 20:30:24.076172] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.668 [2024-02-14 20:30:24.076750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.077102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.668 [2024-02-14 20:30:24.077115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.668 [2024-02-14 20:30:24.077122] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.668 [2024-02-14 20:30:24.077251] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.668 [2024-02-14 20:30:24.077334] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.668 [2024-02-14 20:30:24.077342] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.668 [2024-02-14 20:30:24.077348] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.668 [2024-02-14 20:30:24.079163] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.928 [2024-02-14 20:30:24.088089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.928 [2024-02-14 20:30:24.088632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.928 [2024-02-14 20:30:24.089064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.928 [2024-02-14 20:30:24.089095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.928 [2024-02-14 20:30:24.089116] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.928 [2024-02-14 20:30:24.089446] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.928 [2024-02-14 20:30:24.089810] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.928 [2024-02-14 20:30:24.089823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.928 [2024-02-14 20:30:24.089833] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.928 [2024-02-14 20:30:24.092662] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.928 [2024-02-14 20:30:24.100791] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.928 [2024-02-14 20:30:24.101342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.928 [2024-02-14 20:30:24.101705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.928 [2024-02-14 20:30:24.101738] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.928 [2024-02-14 20:30:24.101759] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.928 [2024-02-14 20:30:24.102139] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.928 [2024-02-14 20:30:24.102568] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.928 [2024-02-14 20:30:24.102593] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.928 [2024-02-14 20:30:24.102614] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.928 [2024-02-14 20:30:24.104625] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.928 [2024-02-14 20:30:24.112624] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.928 [2024-02-14 20:30:24.113110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.928 [2024-02-14 20:30:24.113534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.928 [2024-02-14 20:30:24.113544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.928 [2024-02-14 20:30:24.113553] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.113655] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.113780] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.113787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.929 [2024-02-14 20:30:24.113793] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.929 [2024-02-14 20:30:24.115546] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.929 [2024-02-14 20:30:24.124232] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.929 [2024-02-14 20:30:24.124786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.125077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.125086] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.929 [2024-02-14 20:30:24.125093] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.125210] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.125314] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.125321] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.929 [2024-02-14 20:30:24.125327] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.929 [2024-02-14 20:30:24.127094] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.929 [2024-02-14 20:30:24.136002] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.929 [2024-02-14 20:30:24.136560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.136982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.136994] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.929 [2024-02-14 20:30:24.137001] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.137113] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.137208] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.137215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.929 [2024-02-14 20:30:24.137221] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.929 [2024-02-14 20:30:24.139120] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.929 [2024-02-14 20:30:24.147961] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.929 [2024-02-14 20:30:24.148483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.148779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.148790] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.929 [2024-02-14 20:30:24.148819] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.149207] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.149588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.149613] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.929 [2024-02-14 20:30:24.149633] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.929 [2024-02-14 20:30:24.151561] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.929 [2024-02-14 20:30:24.160006] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.929 [2024-02-14 20:30:24.160575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.161018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.161051] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.929 [2024-02-14 20:30:24.161074] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.161404] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.161795] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.161820] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.929 [2024-02-14 20:30:24.161840] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.929 [2024-02-14 20:30:24.163610] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.929 [2024-02-14 20:30:24.171746] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.929 [2024-02-14 20:30:24.172297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.172769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.172805] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.929 [2024-02-14 20:30:24.172812] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.172922] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.173061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.173069] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.929 [2024-02-14 20:30:24.173075] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.929 [2024-02-14 20:30:24.174806] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.929 [2024-02-14 20:30:24.183611] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.929 [2024-02-14 20:30:24.184126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.184616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.184668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.929 [2024-02-14 20:30:24.184676] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.184786] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.184885] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.184892] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.929 [2024-02-14 20:30:24.184897] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.929 [2024-02-14 20:30:24.186584] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.929 [2024-02-14 20:30:24.195456] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.929 [2024-02-14 20:30:24.195992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.196402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.196434] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.929 [2024-02-14 20:30:24.196456] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.196635] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.196750] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.196758] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.929 [2024-02-14 20:30:24.196764] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.929 [2024-02-14 20:30:24.198371] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.929 [2024-02-14 20:30:24.207307] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.929 [2024-02-14 20:30:24.207835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.208270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.208301] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.929 [2024-02-14 20:30:24.208322] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.208686] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.208811] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.208819] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.929 [2024-02-14 20:30:24.208825] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.929 [2024-02-14 20:30:24.210577] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.929 [2024-02-14 20:30:24.219089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.929 [2024-02-14 20:30:24.219656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.220078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.929 [2024-02-14 20:30:24.220109] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.929 [2024-02-14 20:30:24.220131] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.929 [2024-02-14 20:30:24.220411] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.929 [2024-02-14 20:30:24.220728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.929 [2024-02-14 20:30:24.220739] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.220745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.222537] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.230945] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.231408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.231832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.231842] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.930 [2024-02-14 20:30:24.231848] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.930 [2024-02-14 20:30:24.231979] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.930 [2024-02-14 20:30:24.232070] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.930 [2024-02-14 20:30:24.232077] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.232083] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.233700] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.242747] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.243276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.243764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.243797] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.930 [2024-02-14 20:30:24.243819] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.930 [2024-02-14 20:30:24.244198] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.930 [2024-02-14 20:30:24.244578] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.930 [2024-02-14 20:30:24.244602] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.244622] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.246569] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.254575] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.255089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.255581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.255612] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.930 [2024-02-14 20:30:24.255632] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.930 [2024-02-14 20:30:24.255974] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.930 [2024-02-14 20:30:24.256277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.930 [2024-02-14 20:30:24.256285] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.256293] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.257950] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.266458] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.266994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.267430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.267460] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.930 [2024-02-14 20:30:24.267482] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.930 [2024-02-14 20:30:24.267830] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.930 [2024-02-14 20:30:24.267955] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.930 [2024-02-14 20:30:24.267962] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.267968] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.269572] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.278354] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.278902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.279392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.279423] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.930 [2024-02-14 20:30:24.279444] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.930 [2024-02-14 20:30:24.279858] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.930 [2024-02-14 20:30:24.279997] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.930 [2024-02-14 20:30:24.280005] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.280011] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.281726] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.289960] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.290488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.290953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.290986] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.930 [2024-02-14 20:30:24.291007] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.930 [2024-02-14 20:30:24.291382] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.930 [2024-02-14 20:30:24.291450] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.930 [2024-02-14 20:30:24.291457] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.291463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.293169] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.301575] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.302098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.302562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.302593] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.930 [2024-02-14 20:30:24.302614] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.930 [2024-02-14 20:30:24.302907] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.930 [2024-02-14 20:30:24.303130] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.930 [2024-02-14 20:30:24.303137] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.303143] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.304781] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.313327] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.313813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.314185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.314216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.930 [2024-02-14 20:30:24.314237] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.930 [2024-02-14 20:30:24.314617] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.930 [2024-02-14 20:30:24.315111] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.930 [2024-02-14 20:30:24.315136] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.315156] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.316832] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.325046] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.325607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.326036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.326046] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.930 [2024-02-14 20:30:24.326053] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.930 [2024-02-14 20:30:24.326163] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.930 [2024-02-14 20:30:24.326272] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.930 [2024-02-14 20:30:24.326280] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.930 [2024-02-14 20:30:24.326285] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.930 [2024-02-14 20:30:24.327866] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:46.930 [2024-02-14 20:30:24.336788] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:46.930 [2024-02-14 20:30:24.337342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.930 [2024-02-14 20:30:24.337765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.931 [2024-02-14 20:30:24.337776] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:46.931 [2024-02-14 20:30:24.337782] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:46.931 [2024-02-14 20:30:24.337892] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:46.931 [2024-02-14 20:30:24.338002] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:46.931 [2024-02-14 20:30:24.338010] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:46.931 [2024-02-14 20:30:24.338016] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:46.931 [2024-02-14 20:30:24.339718] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.191 [2024-02-14 20:30:24.348819] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.191 [2024-02-14 20:30:24.349357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.191 [2024-02-14 20:30:24.349801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.191 [2024-02-14 20:30:24.349811] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.191 [2024-02-14 20:30:24.349818] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.191 [2024-02-14 20:30:24.349931] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.191 [2024-02-14 20:30:24.350044] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.191 [2024-02-14 20:30:24.350051] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.191 [2024-02-14 20:30:24.350057] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.191 [2024-02-14 20:30:24.351769] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.191 [2024-02-14 20:30:24.360602] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.191 [2024-02-14 20:30:24.361171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.191 [2024-02-14 20:30:24.361639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.191 [2024-02-14 20:30:24.361681] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.191 [2024-02-14 20:30:24.361704] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.191 [2024-02-14 20:30:24.362033] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.191 [2024-02-14 20:30:24.362385] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.191 [2024-02-14 20:30:24.362393] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.191 [2024-02-14 20:30:24.362398] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.191 [2024-02-14 20:30:24.364087] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1959349 Killed "${NVMF_APP[@]}" "$@"
00:29:47.191 20:30:24 -- host/bdevperf.sh@36 -- # tgt_init
00:29:47.192 20:30:24 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:47.192 20:30:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:29:47.192 20:30:24 -- common/autotest_common.sh@710 -- # xtrace_disable
00:29:47.192 20:30:24 -- common/autotest_common.sh@10 -- # set +x
00:29:47.192 [2024-02-14 20:30:24.372621] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.192 [2024-02-14 20:30:24.373181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.373531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.373542] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.192 [2024-02-14 20:30:24.373548] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.192 [2024-02-14 20:30:24.373681] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.192 [2024-02-14 20:30:24.373765] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.192 [2024-02-14 20:30:24.373772] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.192 [2024-02-14 20:30:24.373778] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
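The repeating ten-record block above is the bdevperf reconnect loop: bdevperf.sh (line 35) has just killed the old target process, so every connect() to 10.0.0.2:4420 is refused until the freshly started nvmf_tgt listens again. errno 111 is ECONNREFUSED on Linux; a minimal sketch (hypothetical host/port, not part of the test suite) reproduces the same failure mode:

    # Minimal sketch, assuming nothing listens on the chosen port: bash's
    # /dev/tcp connect fails with "Connection refused" (errno 111,
    # ECONNREFUSED), the same error posix_sock_create logs above while
    # the target is down.
    if ! (exec 3<>/dev/tcp/127.0.0.1/4420) 2>/dev/null; then
        echo "connect() refused (errno 111, ECONNREFUSED)"
    fi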
00:29:47.192 20:30:24 -- nvmf/common.sh@469 -- # nvmfpid=1960762
00:29:47.192 20:30:24 -- nvmf/common.sh@470 -- # waitforlisten 1960762
00:29:47.192 20:30:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:47.192 [2024-02-14 20:30:24.375437] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.192 20:30:24 -- common/autotest_common.sh@817 -- # '[' -z 1960762 ']'
00:29:47.192 20:30:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:47.192 20:30:24 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:47.192 20:30:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:47.192 20:30:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:47.192 20:30:24 -- common/autotest_common.sh@10 -- # set +x
00:29:47.192 [2024-02-14 20:30:24.384536] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.192 [2024-02-14 20:30:24.385100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.385496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.385506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.192 [2024-02-14 20:30:24.385513] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.192 [2024-02-14 20:30:24.385643] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.192 [2024-02-14 20:30:24.385778] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.192 [2024-02-14 20:30:24.385787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.192 [2024-02-14 20:30:24.385794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.192 [2024-02-14 20:30:24.387569] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
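The waitforlisten trace above polls until the new nvmf_tgt (PID 1960762) answers on its RPC socket at /var/tmp/spdk.sock, retrying up to max_retries=100 times before giving up. A rough sketch of that pattern, not SPDK's actual helper:

    # Rough sketch of a wait-for-listen loop under the same names the
    # trace shows (rpc_addr, max_retries); the real helper lives in
    # autotest_common.sh and does more than this.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$rpc_addr" ] && { echo "target is listening on $rpc_addr"; break; }
        sleep 0.1
    done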
00:29:47.192 [2024-02-14 20:30:24.396406] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.192 [2024-02-14 20:30:24.396808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.397242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.397254] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.192 [2024-02-14 20:30:24.397261] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.192 [2024-02-14 20:30:24.397404] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.192 [2024-02-14 20:30:24.397531] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.192 [2024-02-14 20:30:24.397539] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.192 [2024-02-14 20:30:24.397545] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.192 [2024-02-14 20:30:24.399279] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.192 [2024-02-14 20:30:24.408237] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.192 [2024-02-14 20:30:24.408830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.409261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.409271] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.192 [2024-02-14 20:30:24.409278] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.192 [2024-02-14 20:30:24.409374] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.192 [2024-02-14 20:30:24.409469] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.192 [2024-02-14 20:30:24.409476] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.192 [2024-02-14 20:30:24.409482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.192 [2024-02-14 20:30:24.411343] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.192 [2024-02-14 20:30:24.418148] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:29:47.192 [2024-02-14 20:30:24.418191] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:47.192 [2024-02-14 20:30:24.420208] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.192 [2024-02-14 20:30:24.420770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.421191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.421201] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.192 [2024-02-14 20:30:24.421208] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.192 [2024-02-14 20:30:24.421291] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.192 [2024-02-14 20:30:24.421401] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.192 [2024-02-14 20:30:24.421413] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.192 [2024-02-14 20:30:24.421419] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.192 [2024-02-14 20:30:24.423078] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.192 [2024-02-14 20:30:24.431998] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.192 [2024-02-14 20:30:24.432549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.432894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.432904] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.192 [2024-02-14 20:30:24.432911] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.192 [2024-02-14 20:30:24.433022] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.192 [2024-02-14 20:30:24.433104] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.192 [2024-02-14 20:30:24.433112] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.192 [2024-02-14 20:30:24.433118] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.192 [2024-02-14 20:30:24.434660] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.192 [2024-02-14 20:30:24.444042] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.192 [2024-02-14 20:30:24.444556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.444973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.444983] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.192 [2024-02-14 20:30:24.444990] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.192 [2024-02-14 20:30:24.445085] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.192 [2024-02-14 20:30:24.445181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.192 [2024-02-14 20:30:24.445188] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.192 [2024-02-14 20:30:24.445195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.192 EAL: No free 2048 kB hugepages reported on node 1
00:29:47.192 [2024-02-14 20:30:24.446840] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.192 [2024-02-14 20:30:24.455965] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.192 [2024-02-14 20:30:24.456530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.456979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.192 [2024-02-14 20:30:24.456991] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.192 [2024-02-14 20:30:24.456998] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.192 [2024-02-14 20:30:24.457110] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.192 [2024-02-14 20:30:24.457220] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.192 [2024-02-14 20:30:24.457228] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.192 [2024-02-14 20:30:24.457234] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.192 [2024-02-14 20:30:24.459000] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.193 [2024-02-14 20:30:24.467957] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.193 [2024-02-14 20:30:24.468438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.468841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.468852] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.193 [2024-02-14 20:30:24.468859] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.193 [2024-02-14 20:30:24.468999] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.193 [2024-02-14 20:30:24.469109] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.193 [2024-02-14 20:30:24.469117] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.193 [2024-02-14 20:30:24.469123] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.193 [2024-02-14 20:30:24.470955] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.193 [2024-02-14 20:30:24.479707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.193 [2024-02-14 20:30:24.480276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.480709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.480720] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.193 [2024-02-14 20:30:24.480726] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.193 [2024-02-14 20:30:24.480840] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.193 [2024-02-14 20:30:24.480938] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.193 [2024-02-14 20:30:24.480946] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.193 [2024-02-14 20:30:24.480952] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.193 [2024-02-14 20:30:24.482261] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:47.193 [2024-02-14 20:30:24.482784] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.193 [2024-02-14 20:30:24.491648] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.193 [2024-02-14 20:30:24.492244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.492674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.492685] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.193 [2024-02-14 20:30:24.492692] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.193 [2024-02-14 20:30:24.492808] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.193 [2024-02-14 20:30:24.492936] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.193 [2024-02-14 20:30:24.492944] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.193 [2024-02-14 20:30:24.492951] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.193 [2024-02-14 20:30:24.494787] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.193 [2024-02-14 20:30:24.503323] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.193 [2024-02-14 20:30:24.503978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.504348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.504362] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.193 [2024-02-14 20:30:24.504369] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.193 [2024-02-14 20:30:24.504479] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.193 [2024-02-14 20:30:24.504603] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.193 [2024-02-14 20:30:24.504611] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.193 [2024-02-14 20:30:24.504617] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.193 [2024-02-14 20:30:24.506462] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.193 [2024-02-14 20:30:24.515035] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.193 [2024-02-14 20:30:24.515579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.516011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.516021] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.193 [2024-02-14 20:30:24.516028] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.193 [2024-02-14 20:30:24.516113] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.193 [2024-02-14 20:30:24.516197] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.193 [2024-02-14 20:30:24.516204] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.193 [2024-02-14 20:30:24.516210] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.193 [2024-02-14 20:30:24.517956] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.193 [2024-02-14 20:30:24.526888] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.193 [2024-02-14 20:30:24.527466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.527840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.527851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.193 [2024-02-14 20:30:24.527860] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.193 [2024-02-14 20:30:24.528005] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.193 [2024-02-14 20:30:24.528089] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.193 [2024-02-14 20:30:24.528097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.193 [2024-02-14 20:30:24.528105] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.193 [2024-02-14 20:30:24.529924] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.193 [2024-02-14 20:30:24.538892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.193 [2024-02-14 20:30:24.539463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.539890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.539901] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.193 [2024-02-14 20:30:24.539914] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.193 [2024-02-14 20:30:24.540059] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.193 [2024-02-14 20:30:24.540186] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.193 [2024-02-14 20:30:24.540194] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.193 [2024-02-14 20:30:24.540201] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.193 [2024-02-14 20:30:24.542081] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.193 [2024-02-14 20:30:24.550877] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.193 [2024-02-14 20:30:24.551472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.551892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.551903] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.193 [2024-02-14 20:30:24.551910] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.193 [2024-02-14 20:30:24.552024] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.193 [2024-02-14 20:30:24.552181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.193 [2024-02-14 20:30:24.552189] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.193 [2024-02-14 20:30:24.552196] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.193 [2024-02-14 20:30:24.553842] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.193 [2024-02-14 20:30:24.560243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:47.193 [2024-02-14 20:30:24.560343] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:47.193 [2024-02-14 20:30:24.560351] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:47.193 [2024-02-14 20:30:24.560357] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
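The app_setup_trace notices above name the shared-memory trace file for this run. A sketch of the "copy /dev/shm/nvmf_trace.0 for offline analysis" step those notices suggest (the destination path here is an illustrative assumption, not part of the test scripts):

#!/usr/bin/env python3
"""Sketch only: preserve the trace file named by app_setup_trace above.
The source path comes from the log notice; the destination is arbitrary."""
import shutil

shutil.copy("/dev/shm/nvmf_trace.0", "./nvmf_trace.0")
print("saved trace snapshot to ./nvmf_trace.0")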
00:29:47.193 [2024-02-14 20:30:24.560393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:47.193 [2024-02-14 20:30:24.560479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:47.193 [2024-02-14 20:30:24.560480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:47.193 [2024-02-14 20:30:24.562910] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.193 [2024-02-14 20:30:24.563455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.563838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.193 [2024-02-14 20:30:24.563849] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.193 [2024-02-14 20:30:24.563856] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.193 [2024-02-14 20:30:24.564000] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.194 [2024-02-14 20:30:24.564114] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.194 [2024-02-14 20:30:24.564122] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.194 [2024-02-14 20:30:24.564133] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.194 [2024-02-14 20:30:24.565798] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.194 [2024-02-14 20:30:24.575079] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.194 [2024-02-14 20:30:24.575695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.194 [2024-02-14 20:30:24.576043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.194 [2024-02-14 20:30:24.576053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.194 [2024-02-14 20:30:24.576061] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.194 [2024-02-14 20:30:24.576220] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.194 [2024-02-14 20:30:24.576349] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.194 [2024-02-14 20:30:24.576358] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.194 [2024-02-14 20:30:24.576365] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.194 [2024-02-14 20:30:24.578171] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
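The three reactor_run notices above line up with the core mask in the DPDK EAL parameters at the top of this section (-c 0xE) and with spdk_app_start's "Total cores available: 3": 0xE is 0b1110, which selects cores 1, 2 and 3. A standalone decode of the mask (sketch only, not part of the test):

#!/usr/bin/env python3
"""Sketch: decode the -c 0xE core mask from the EAL parameters above."""
mask = 0xE  # bit n set -> core n is used by a reactor
cores = [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]
print(cores)  # -> [1, 2, 3], matching the reactors started on cores 1, 2 and 3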
00:29:47.194 [2024-02-14 20:30:24.587113] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.194 [2024-02-14 20:30:24.587708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.194 [2024-02-14 20:30:24.588054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.194 [2024-02-14 20:30:24.588064] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.194 [2024-02-14 20:30:24.588071] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.194 [2024-02-14 20:30:24.588157] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.194 [2024-02-14 20:30:24.588286] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.194 [2024-02-14 20:30:24.588294] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.194 [2024-02-14 20:30:24.588300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.194 [2024-02-14 20:30:24.589993] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.194 [2024-02-14 20:30:24.599025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.194 [2024-02-14 20:30:24.599527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.194 [2024-02-14 20:30:24.599934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.194 [2024-02-14 20:30:24.599946] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.194 [2024-02-14 20:30:24.599953] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.194 [2024-02-14 20:30:24.600054] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.194 [2024-02-14 20:30:24.600123] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.194 [2024-02-14 20:30:24.600131] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.194 [2024-02-14 20:30:24.600138] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.194 [2024-02-14 20:30:24.601976] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.455 [2024-02-14 20:30:24.610966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.455 [2024-02-14 20:30:24.611559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.611987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.611998] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.455 [2024-02-14 20:30:24.612006] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.455 [2024-02-14 20:30:24.612151] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.455 [2024-02-14 20:30:24.612265] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.455 [2024-02-14 20:30:24.612272] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.455 [2024-02-14 20:30:24.612279] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.455 [2024-02-14 20:30:24.614021] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.455 [2024-02-14 20:30:24.623024] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.455 [2024-02-14 20:30:24.623652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.624079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.624089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.455 [2024-02-14 20:30:24.624097] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.455 [2024-02-14 20:30:24.624211] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.455 [2024-02-14 20:30:24.624309] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.455 [2024-02-14 20:30:24.624317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.455 [2024-02-14 20:30:24.624324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.455 [2024-02-14 20:30:24.626032] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.455 [2024-02-14 20:30:24.634861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.455 [2024-02-14 20:30:24.635398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.635800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.635812] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.455 [2024-02-14 20:30:24.635819] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.455 [2024-02-14 20:30:24.635947] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.455 [2024-02-14 20:30:24.636032] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.455 [2024-02-14 20:30:24.636040] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.455 [2024-02-14 20:30:24.636047] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.455 [2024-02-14 20:30:24.637721] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.455 [2024-02-14 20:30:24.646977] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.455 [2024-02-14 20:30:24.647456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.647764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.647776] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.455 [2024-02-14 20:30:24.647783] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.455 [2024-02-14 20:30:24.647896] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.455 [2024-02-14 20:30:24.648009] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.455 [2024-02-14 20:30:24.648017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.455 [2024-02-14 20:30:24.648023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.455 [2024-02-14 20:30:24.649756] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.455 [2024-02-14 20:30:24.658921] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.455 [2024-02-14 20:30:24.659299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.659655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.659666] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.455 [2024-02-14 20:30:24.659673] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.455 [2024-02-14 20:30:24.659757] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.455 [2024-02-14 20:30:24.659870] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.455 [2024-02-14 20:30:24.659881] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.455 [2024-02-14 20:30:24.659888] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.455 [2024-02-14 20:30:24.661679] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.455 [2024-02-14 20:30:24.670879] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.455 [2024-02-14 20:30:24.671392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.671717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.671728] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.455 [2024-02-14 20:30:24.671735] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.455 [2024-02-14 20:30:24.671804] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.455 [2024-02-14 20:30:24.671902] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.455 [2024-02-14 20:30:24.671910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.455 [2024-02-14 20:30:24.671916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.455 [2024-02-14 20:30:24.673695] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.455 [2024-02-14 20:30:24.683034] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.455 [2024-02-14 20:30:24.683538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.683884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.683895] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.455 [2024-02-14 20:30:24.683902] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.455 [2024-02-14 20:30:24.684016] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.455 [2024-02-14 20:30:24.684085] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.455 [2024-02-14 20:30:24.684092] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.455 [2024-02-14 20:30:24.684099] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.455 [2024-02-14 20:30:24.685746] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.455 [2024-02-14 20:30:24.695007] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.455 [2024-02-14 20:30:24.695553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.695905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.695917] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.455 [2024-02-14 20:30:24.695924] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.455 [2024-02-14 20:30:24.695993] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.455 [2024-02-14 20:30:24.696135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.455 [2024-02-14 20:30:24.696144] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.455 [2024-02-14 20:30:24.696150] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.455 [2024-02-14 20:30:24.697943] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.455 [2024-02-14 20:30:24.707003] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.455 [2024-02-14 20:30:24.707414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.455 [2024-02-14 20:30:24.707823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.707835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.707843] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.707987] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.456 [2024-02-14 20:30:24.708120] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.456 [2024-02-14 20:30:24.708130] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.456 [2024-02-14 20:30:24.708137] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.456 [2024-02-14 20:30:24.709919] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.456 [2024-02-14 20:30:24.719086] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.456 [2024-02-14 20:30:24.719703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.719999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.720013] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.720019] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.720089] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.456 [2024-02-14 20:30:24.720202] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.456 [2024-02-14 20:30:24.720209] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.456 [2024-02-14 20:30:24.720215] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.456 [2024-02-14 20:30:24.721934] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.456 [2024-02-14 20:30:24.731058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.456 [2024-02-14 20:30:24.731547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.731993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.732004] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.732011] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.732124] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.456 [2024-02-14 20:30:24.732208] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.456 [2024-02-14 20:30:24.732215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.456 [2024-02-14 20:30:24.732221] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.456 [2024-02-14 20:30:24.734057] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.456 [2024-02-14 20:30:24.743165] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.456 [2024-02-14 20:30:24.743702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.744050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.744061] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.744068] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.744181] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.456 [2024-02-14 20:30:24.744265] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.456 [2024-02-14 20:30:24.744272] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.456 [2024-02-14 20:30:24.744279] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.456 [2024-02-14 20:30:24.746174] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.456 [2024-02-14 20:30:24.755054] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.456 [2024-02-14 20:30:24.755541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.755950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.755961] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.755970] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.756054] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.456 [2024-02-14 20:30:24.756167] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.456 [2024-02-14 20:30:24.756175] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.456 [2024-02-14 20:30:24.756181] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.456 [2024-02-14 20:30:24.758032] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.456 [2024-02-14 20:30:24.767040] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.456 [2024-02-14 20:30:24.767488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.767935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.767947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.767955] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.768084] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.456 [2024-02-14 20:30:24.768240] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.456 [2024-02-14 20:30:24.768248] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.456 [2024-02-14 20:30:24.768255] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.456 [2024-02-14 20:30:24.770062] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.456 [2024-02-14 20:30:24.778996] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.456 [2024-02-14 20:30:24.779574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.779975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.779986] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.779992] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.780120] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.456 [2024-02-14 20:30:24.780233] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.456 [2024-02-14 20:30:24.780241] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.456 [2024-02-14 20:30:24.780247] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.456 [2024-02-14 20:30:24.781967] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.456 [2024-02-14 20:30:24.790830] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.456 [2024-02-14 20:30:24.791346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.791770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.791781] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.791787] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.791919] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.456 [2024-02-14 20:30:24.792032] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.456 [2024-02-14 20:30:24.792041] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.456 [2024-02-14 20:30:24.792047] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.456 [2024-02-14 20:30:24.793798] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.456 [2024-02-14 20:30:24.802767] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.456 [2024-02-14 20:30:24.803261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.803688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.803699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.803706] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.803835] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.456 [2024-02-14 20:30:24.803947] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.456 [2024-02-14 20:30:24.803956] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.456 [2024-02-14 20:30:24.803962] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.456 [2024-02-14 20:30:24.805747] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.456 [2024-02-14 20:30:24.814792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.456 [2024-02-14 20:30:24.815200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.815664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.456 [2024-02-14 20:30:24.815675] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.456 [2024-02-14 20:30:24.815682] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.456 [2024-02-14 20:30:24.815810] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.457 [2024-02-14 20:30:24.815922] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.457 [2024-02-14 20:30:24.815930] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.457 [2024-02-14 20:30:24.815936] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.457 [2024-02-14 20:30:24.817476] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.457 [2024-02-14 20:30:24.826683] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.457 [2024-02-14 20:30:24.827103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.457 [2024-02-14 20:30:24.827563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.457 [2024-02-14 20:30:24.827573] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.457 [2024-02-14 20:30:24.827580] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.457 [2024-02-14 20:30:24.827684] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.457 [2024-02-14 20:30:24.827801] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.457 [2024-02-14 20:30:24.827809] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.457 [2024-02-14 20:30:24.827815] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.457 [2024-02-14 20:30:24.829754] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.457 [2024-02-14 20:30:24.838625] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.457 [2024-02-14 20:30:24.839067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.457 [2024-02-14 20:30:24.839523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.457 [2024-02-14 20:30:24.839534] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.457 [2024-02-14 20:30:24.839540] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.457 [2024-02-14 20:30:24.839639] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.457 [2024-02-14 20:30:24.839756] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.457 [2024-02-14 20:30:24.839765] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.457 [2024-02-14 20:30:24.839771] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.457 [2024-02-14 20:30:24.841445] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.457 [2024-02-14 20:30:24.850551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.457 [2024-02-14 20:30:24.851055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.457 [2024-02-14 20:30:24.851349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.457 [2024-02-14 20:30:24.851359] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.457 [2024-02-14 20:30:24.851366] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.457 [2024-02-14 20:30:24.851449] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.457 [2024-02-14 20:30:24.851562] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.457 [2024-02-14 20:30:24.851569] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.457 [2024-02-14 20:30:24.851574] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.457 [2024-02-14 20:30:24.853411] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.457 [2024-02-14 20:30:24.862658] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.457 [2024-02-14 20:30:24.863151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.457 [2024-02-14 20:30:24.863577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.457 [2024-02-14 20:30:24.863587] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.457 [2024-02-14 20:30:24.863593] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.457 [2024-02-14 20:30:24.863741] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.457 [2024-02-14 20:30:24.863869] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.457 [2024-02-14 20:30:24.863880] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.457 [2024-02-14 20:30:24.863886] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.457 [2024-02-14 20:30:24.865750] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.717 [2024-02-14 20:30:24.874767] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.717 [2024-02-14 20:30:24.875174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.717 [2024-02-14 20:30:24.875476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.717 [2024-02-14 20:30:24.875486] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.717 [2024-02-14 20:30:24.875492] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.717 [2024-02-14 20:30:24.875591] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.717 [2024-02-14 20:30:24.875724] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.717 [2024-02-14 20:30:24.875732] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.717 [2024-02-14 20:30:24.875738] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.717 [2024-02-14 20:30:24.877572] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.717 [2024-02-14 20:30:24.886542] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.717 [2024-02-14 20:30:24.886959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.887311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.887321] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.718 [2024-02-14 20:30:24.887328] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.718 [2024-02-14 20:30:24.887456] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.718 [2024-02-14 20:30:24.887524] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.718 [2024-02-14 20:30:24.887532] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.718 [2024-02-14 20:30:24.887539] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.718 [2024-02-14 20:30:24.889084] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.718 [2024-02-14 20:30:24.898380] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.718 [2024-02-14 20:30:24.898921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.899227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.899237] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.718 [2024-02-14 20:30:24.899243] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.718 [2024-02-14 20:30:24.899342] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.718 [2024-02-14 20:30:24.899455] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.718 [2024-02-14 20:30:24.899462] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.718 [2024-02-14 20:30:24.899471] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.718 [2024-02-14 20:30:24.901220] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.718 [2024-02-14 20:30:24.910377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.718 [2024-02-14 20:30:24.910868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.911270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.911281] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.718 [2024-02-14 20:30:24.911288] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.718 [2024-02-14 20:30:24.911386] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.718 [2024-02-14 20:30:24.911499] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.718 [2024-02-14 20:30:24.911507] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.718 [2024-02-14 20:30:24.911513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.718 [2024-02-14 20:30:24.913249] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.718 [2024-02-14 20:30:24.922215] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.718 [2024-02-14 20:30:24.922902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.923250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.923260] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.718 [2024-02-14 20:30:24.923267] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.718 [2024-02-14 20:30:24.923366] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.718 [2024-02-14 20:30:24.923479] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.718 [2024-02-14 20:30:24.923487] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.718 [2024-02-14 20:30:24.923493] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.718 [2024-02-14 20:30:24.925055] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.718 [2024-02-14 20:30:24.934184] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.718 [2024-02-14 20:30:24.934698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.935046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.718 [2024-02-14 20:30:24.935057] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420
00:29:47.718 [2024-02-14 20:30:24.935064] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set
00:29:47.718 [2024-02-14 20:30:24.935192] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor
00:29:47.718 [2024-02-14 20:30:24.935319] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:47.718 [2024-02-14 20:30:24.935327] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:47.718 [2024-02-14 20:30:24.935334] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:47.718 [2024-02-14 20:30:24.937262] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.718 [2024-02-14 20:30:24.946118] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.718 [2024-02-14 20:30:24.946705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.947011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.947021] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.718 [2024-02-14 20:30:24.947028] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.718 [2024-02-14 20:30:24.947112] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.718 [2024-02-14 20:30:24.947239] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.718 [2024-02-14 20:30:24.947247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.718 [2024-02-14 20:30:24.947253] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.718 [2024-02-14 20:30:24.949003] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.718 [2024-02-14 20:30:24.958158] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.718 [2024-02-14 20:30:24.958668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.959024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.959034] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.718 [2024-02-14 20:30:24.959041] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.718 [2024-02-14 20:30:24.959140] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.718 [2024-02-14 20:30:24.959283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.718 [2024-02-14 20:30:24.959291] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.718 [2024-02-14 20:30:24.959297] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.718 [2024-02-14 20:30:24.960991] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.718 [2024-02-14 20:30:24.970140] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.718 [2024-02-14 20:30:24.970699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.971118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.971128] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.718 [2024-02-14 20:30:24.971135] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.718 [2024-02-14 20:30:24.971234] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.718 [2024-02-14 20:30:24.971361] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.718 [2024-02-14 20:30:24.971369] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.718 [2024-02-14 20:30:24.971376] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.718 [2024-02-14 20:30:24.973070] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.718 [2024-02-14 20:30:24.982233] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.718 [2024-02-14 20:30:24.982790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.983108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.983119] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.718 [2024-02-14 20:30:24.983126] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.718 [2024-02-14 20:30:24.983261] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.718 [2024-02-14 20:30:24.983374] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.718 [2024-02-14 20:30:24.983383] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.718 [2024-02-14 20:30:24.983389] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.718 [2024-02-14 20:30:24.985053] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.718 [2024-02-14 20:30:24.994288] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.718 [2024-02-14 20:30:24.994832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.995229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.718 [2024-02-14 20:30:24.995239] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:24.995245] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:24.995388] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:24.995471] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.719 [2024-02-14 20:30:24.995478] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.719 [2024-02-14 20:30:24.995485] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.719 [2024-02-14 20:30:24.997234] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.719 [2024-02-14 20:30:25.006455] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.719 [2024-02-14 20:30:25.006947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.007349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.007359] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:25.007365] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:25.007464] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:25.007592] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.719 [2024-02-14 20:30:25.007600] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.719 [2024-02-14 20:30:25.007605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.719 [2024-02-14 20:30:25.009265] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.719 [2024-02-14 20:30:25.018450] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.719 [2024-02-14 20:30:25.018964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.019412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.019422] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:25.019429] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:25.019542] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:25.019626] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.719 [2024-02-14 20:30:25.019633] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.719 [2024-02-14 20:30:25.019639] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.719 [2024-02-14 20:30:25.021487] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.719 [2024-02-14 20:30:25.030354] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.719 [2024-02-14 20:30:25.030905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.031336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.031346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:25.031353] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:25.031452] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:25.031602] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.719 [2024-02-14 20:30:25.031611] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.719 [2024-02-14 20:30:25.031617] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.719 [2024-02-14 20:30:25.033247] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.719 [2024-02-14 20:30:25.042194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.719 [2024-02-14 20:30:25.042709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.043152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.043162] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:25.043169] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:25.043268] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:25.043351] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.719 [2024-02-14 20:30:25.043358] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.719 [2024-02-14 20:30:25.043364] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.719 [2024-02-14 20:30:25.045126] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.719 [2024-02-14 20:30:25.054299] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.719 [2024-02-14 20:30:25.054811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.055231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.055243] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:25.055250] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:25.055364] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:25.055505] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.719 [2024-02-14 20:30:25.055513] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.719 [2024-02-14 20:30:25.055519] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.719 [2024-02-14 20:30:25.057197] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.719 [2024-02-14 20:30:25.066249] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.719 [2024-02-14 20:30:25.066818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.067240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.067250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:25.067256] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:25.067369] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:25.067482] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.719 [2024-02-14 20:30:25.067490] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.719 [2024-02-14 20:30:25.067496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.719 [2024-02-14 20:30:25.069378] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.719 [2024-02-14 20:30:25.078158] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.719 [2024-02-14 20:30:25.078671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.079079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.079089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:25.079096] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:25.079224] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:25.079381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.719 [2024-02-14 20:30:25.079389] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.719 [2024-02-14 20:30:25.079395] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.719 [2024-02-14 20:30:25.081201] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.719 [2024-02-14 20:30:25.090157] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.719 [2024-02-14 20:30:25.090683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.091048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.091058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:25.091067] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:25.091180] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:25.091308] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.719 [2024-02-14 20:30:25.091316] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.719 [2024-02-14 20:30:25.091323] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.719 [2024-02-14 20:30:25.093116] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.719 [2024-02-14 20:30:25.102325] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.719 [2024-02-14 20:30:25.102879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.103306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.719 [2024-02-14 20:30:25.103316] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.719 [2024-02-14 20:30:25.103323] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.719 [2024-02-14 20:30:25.103436] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.719 [2024-02-14 20:30:25.103534] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.720 [2024-02-14 20:30:25.103542] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.720 [2024-02-14 20:30:25.103548] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.720 [2024-02-14 20:30:25.105230] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.720 [2024-02-14 20:30:25.114184] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.720 [2024-02-14 20:30:25.114701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.720 [2024-02-14 20:30:25.115128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.720 [2024-02-14 20:30:25.115138] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.720 [2024-02-14 20:30:25.115144] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.720 [2024-02-14 20:30:25.115272] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.720 [2024-02-14 20:30:25.115385] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.720 [2024-02-14 20:30:25.115394] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.720 [2024-02-14 20:30:25.115400] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.720 [2024-02-14 20:30:25.117209] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.720 [2024-02-14 20:30:25.126269] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.720 [2024-02-14 20:30:25.126832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.720 [2024-02-14 20:30:25.127231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.720 [2024-02-14 20:30:25.127241] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.720 [2024-02-14 20:30:25.127249] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.720 [2024-02-14 20:30:25.127365] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.720 [2024-02-14 20:30:25.127493] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.720 [2024-02-14 20:30:25.127501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.720 [2024-02-14 20:30:25.127507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.720 [2024-02-14 20:30:25.129181] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.980 [2024-02-14 20:30:25.138398] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.980 [2024-02-14 20:30:25.138953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.139371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.139381] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.980 [2024-02-14 20:30:25.139387] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.980 [2024-02-14 20:30:25.139515] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.980 [2024-02-14 20:30:25.139629] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.980 [2024-02-14 20:30:25.139637] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.980 [2024-02-14 20:30:25.139643] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.980 [2024-02-14 20:30:25.141317] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.980 [2024-02-14 20:30:25.150260] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.980 [2024-02-14 20:30:25.150774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.151193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.151204] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.980 [2024-02-14 20:30:25.151210] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.980 [2024-02-14 20:30:25.151324] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.980 [2024-02-14 20:30:25.151451] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.980 [2024-02-14 20:30:25.151459] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.980 [2024-02-14 20:30:25.151465] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.980 [2024-02-14 20:30:25.153226] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.980 [2024-02-14 20:30:25.162299] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.980 [2024-02-14 20:30:25.162876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.163232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.163243] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.980 [2024-02-14 20:30:25.163250] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.980 [2024-02-14 20:30:25.163378] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.980 [2024-02-14 20:30:25.163494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.980 [2024-02-14 20:30:25.163502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.980 [2024-02-14 20:30:25.163508] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.980 [2024-02-14 20:30:25.165276] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.980 [2024-02-14 20:30:25.174202] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.980 [2024-02-14 20:30:25.174691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.175066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.175077] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.980 [2024-02-14 20:30:25.175083] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.980 [2024-02-14 20:30:25.175240] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.980 [2024-02-14 20:30:25.175339] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.980 [2024-02-14 20:30:25.175347] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.980 [2024-02-14 20:30:25.175354] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.980 [2024-02-14 20:30:25.177178] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.980 [2024-02-14 20:30:25.186218] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.980 [2024-02-14 20:30:25.186702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.187117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.187127] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.980 [2024-02-14 20:30:25.187134] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.980 [2024-02-14 20:30:25.187233] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.980 [2024-02-14 20:30:25.187331] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.980 [2024-02-14 20:30:25.187339] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.980 [2024-02-14 20:30:25.187346] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.980 [2024-02-14 20:30:25.189185] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.980 [2024-02-14 20:30:25.198200] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.980 [2024-02-14 20:30:25.198773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.199375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.980 [2024-02-14 20:30:25.199385] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.980 [2024-02-14 20:30:25.199392] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.981 [2024-02-14 20:30:25.199505] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.981 [2024-02-14 20:30:25.199603] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.981 [2024-02-14 20:30:25.199614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.981 [2024-02-14 20:30:25.199620] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.981 [2024-02-14 20:30:25.201383] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.981 [2024-02-14 20:30:25.210149] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.981 [2024-02-14 20:30:25.210696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.981 [2024-02-14 20:30:25.211046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.981 [2024-02-14 20:30:25.211056] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.981 [2024-02-14 20:30:25.211063] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.981 [2024-02-14 20:30:25.211147] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.981 [2024-02-14 20:30:25.211259] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.981 [2024-02-14 20:30:25.211266] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.981 [2024-02-14 20:30:25.211272] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.981 [2024-02-14 20:30:25.213079] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.981 [2024-02-14 20:30:25.221940] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.981 [2024-02-14 20:30:25.222481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.981 [2024-02-14 20:30:25.222846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.981 [2024-02-14 20:30:25.222857] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.981 [2024-02-14 20:30:25.222863] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.981 [2024-02-14 20:30:25.222932] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.981 [2024-02-14 20:30:25.223030] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.981 [2024-02-14 20:30:25.223038] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.981 [2024-02-14 20:30:25.223044] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.981 20:30:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:47.981 20:30:25 -- common/autotest_common.sh@850 -- # return 0 00:29:47.981 [2024-02-14 20:30:25.224719] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
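The errno = 111 in the loop above is ECONNREFUSED: bdevperf keeps dialing 10.0.0.2:4420 before the NVMe/TCP listener exists (the listener is only added further down, after nvmf_create_transport and nvmf_subsystem_add_listener), so every reset attempt is refused and bdev_nvme immediately reschedules the reconnect. A hedged, SPDK-independent probe of the same condition from the initiator host, using nothing but bash's /dev/tcp redirection (the address and port are the ones in the trace):

    # A refused or timed-out probe here corresponds to the connect() errno = 111 loop.
    ip=10.0.0.2 port=4420   # values taken from the log above
    if timeout 1 bash -c ">/dev/tcp/${ip}/${port}" 2>/dev/null; then
        echo "listener reachable at ${ip}:${port}"
    else
        echo "no listener at ${ip}:${port} (refused or timed out)"
    fi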
00:29:47.981 20:30:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:29:47.981 20:30:25 -- common/autotest_common.sh@716 -- # xtrace_disable
00:29:47.981 20:30:25 -- common/autotest_common.sh@10 -- # set +x
[... two more reconnect cycles (starting 20:30:25.233782 and 20:30:25.245643), identical to the cycle shown above ...]
00:29:47.981 [2024-02-14 20:30:25.257553] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... the usual connect() errno = 111 / reinitialization-failed sequence (20:30:25.258215-20:30:25.258961) ...]
00:29:47.981 20:30:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:47.981 20:30:25 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:47.981 20:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:47.981 20:30:25 -- common/autotest_common.sh@10 -- # set +x
00:29:47.981 [2024-02-14 20:30:25.260651] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.981 [2024-02-14 20:30:25.264622] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:47.981 20:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:47.981 [2024-02-14 20:30:25.269620] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:47.981 20:30:25 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:47.981 20:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:47.981 20:30:25 -- common/autotest_common.sh@10 -- # set +x
[... the same reconnect-failure sequence, interleaved with the RPC trace (20:30:25.270154-20:30:25.270882) ...]
00:29:47.981 [2024-02-14 20:30:25.272763] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.981 [2024-02-14 20:30:25.281581] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.981 [2024-02-14 20:30:25.282060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.982 [2024-02-14 20:30:25.282460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.982 [2024-02-14 20:30:25.282470] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.982 [2024-02-14 20:30:25.282477] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.982 [2024-02-14 20:30:25.282561] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.982 [2024-02-14 20:30:25.282678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.982 [2024-02-14 20:30:25.282685] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.982 [2024-02-14 20:30:25.282692] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.982 [2024-02-14 20:30:25.284262] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.982 [2024-02-14 20:30:25.293391] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.982 [2024-02-14 20:30:25.293984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.982 [2024-02-14 20:30:25.294413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.982 [2024-02-14 20:30:25.294423] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218b630 with addr=10.0.0.2, port=4420 00:29:47.982 [2024-02-14 20:30:25.294430] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b630 is same with the state(5) to be set 00:29:47.982 [2024-02-14 20:30:25.294545] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b630 (9): Bad file descriptor 00:29:47.982 [2024-02-14 20:30:25.294678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.982 [2024-02-14 20:30:25.294687] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.982 [2024-02-14 20:30:25.294694] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.982 [2024-02-14 20:30:25.296629] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.982 Malloc0
00:29:47.982 20:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:47.982 20:30:25 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:47.982 20:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:47.982 20:30:25 -- common/autotest_common.sh@10 -- # set +x
[... another identical reconnect cycle (20:30:25.305298-20:30:25.308199) ...]
00:29:47.982 20:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:47.982 20:30:25 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:47.982 20:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:47.982 20:30:25 -- common/autotest_common.sh@10 -- # set +x
[... one more reconnect cycle begins at 20:30:25.317136; its closing "Resetting controller failed." appears at 20:30:25.319603 below ...]
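Pulling the rpc_cmd calls out of the interleaved trace, the target-side bring-up that this bdevperf test performs is the following sequence (a consolidated sketch: every command and argument is taken verbatim from the trace above and from the listener step just below; only the rpc.py invocation path is an assumption based on the workspace layout):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"  # assumed path in this checkout
    $RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, options as passed by the test harness
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MB malloc bdev with 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only once the last call completes does the target accept connections, which is why the reconnect loop above finally succeeds shortly after the "Target Listening" notice.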
00:29:47.982 20:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:47.982 20:30:25 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:47.982 [2024-02-14 20:30:25.319603] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:47.982 20:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:47.982 20:30:25 -- common/autotest_common.sh@10 -- # set +x
00:29:47.982 [2024-02-14 20:30:25.322739] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:47.982 20:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:47.982 20:30:25 -- host/bdevperf.sh@38 -- # wait 1959830
00:29:47.982 [2024-02-14 20:30:25.329239] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.241 [2024-02-14 20:30:25.402582] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:58.218
00:29:58.218 Latency(us)
00:29:58.218 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:58.218 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:58.218 Verification LBA range: start 0x0 length 0x4000
00:29:58.218 Nvme1n1                     :      15.00   12599.81      49.22   18928.95       0.00    4048.24     975.24   20846.69
00:29:58.218 ===================================================================================================================
00:29:58.218 Total                       :               12599.81      49.22   18928.95       0.00    4048.24     975.24   20846.69
00:29:58.218 [2024-02-14 20:30:33.917471] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
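One quick consistency check on the table: bdevperf's MiB/s column should equal IOPS x IO size / 2^20, and with this job's 4096-byte verify I/Os the figures line up; the large Fail/s value is consistent with the I/O failures logged during the reconnect loop before the listener came up. The arithmetic, as a one-liner:

    # 12599.81 IOPS at 4096 bytes per I/O, expressed in MiB/s -> prints "49.22 MiB/s"
    awk 'BEGIN { printf "%.2f MiB/s\n", 12599.81 * 4096 / 1048576 }'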
00:29:58.218 20:30:34 -- host/bdevperf.sh@39 -- # sync
00:29:58.218 20:30:34 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:58.218 20:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:58.218 20:30:34 -- common/autotest_common.sh@10 -- # set +x
00:29:58.218 20:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:58.218 20:30:34 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:58.218 20:30:34 -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:58.218 20:30:34 -- nvmf/common.sh@476 -- # nvmfcleanup
00:29:58.218 20:30:34 -- nvmf/common.sh@116 -- # sync
00:29:58.218 20:30:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:29:58.218 20:30:34 -- nvmf/common.sh@119 -- # set +e
00:29:58.218 20:30:34 -- nvmf/common.sh@120 -- # for i in {1..20}
00:29:58.218 20:30:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:29:58.218 rmmod nvme_tcp
00:29:58.218 rmmod nvme_fabrics
00:29:58.218 rmmod nvme_keyring
00:29:58.218 20:30:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:29:58.218 20:30:34 -- nvmf/common.sh@123 -- # set -e
00:29:58.218 20:30:34 -- nvmf/common.sh@124 -- # return 0
00:29:58.218 20:30:34 -- nvmf/common.sh@477 -- # '[' -n 1960762 ']'
00:29:58.218 20:30:34 -- nvmf/common.sh@478 -- # killprocess 1960762
00:29:58.218 20:30:34 -- common/autotest_common.sh@924 -- # '[' -z 1960762 ']'
00:29:58.218 20:30:34 -- common/autotest_common.sh@928 -- # kill -0 1960762
00:29:58.218 20:30:34 -- common/autotest_common.sh@929 -- # uname
00:29:58.218 20:30:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:29:58.218 20:30:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1960762
00:29:58.218 20:30:34 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:29:58.218 20:30:34 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:29:58.218 20:30:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1960762'
00:29:58.218 killing process with pid 1960762
00:29:58.218 20:30:34 -- common/autotest_common.sh@943 -- # kill 1960762
00:29:58.218 20:30:34 -- common/autotest_common.sh@948 -- # wait 1960762
00:29:58.218 20:30:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:29:58.218 20:30:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:29:58.218 20:30:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:29:58.218 20:30:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:58.218 20:30:34 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:29:58.218 20:30:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:58.218 20:30:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:58.218 20:30:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:59.156 20:30:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:29:59.156
00:29:59.156 real 0m26.858s
00:29:59.156 user 1m3.420s
00:29:59.156 sys 0m6.745s
00:29:59.156 20:30:36 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:29:59.156 20:30:36 -- common/autotest_common.sh@10 -- # set +x
00:29:59.156 ************************************
00:29:59.156 END TEST nvmf_bdevperf
00:29:59.156 ************************************
00:29:59.156 20:30:36 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:59.156 20:30:36 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']'
00:29:59.156 20:30:36 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:29:59.156 20:30:36 -- common/autotest_common.sh@10 -- # set +x
00:29:59.156 ************************************
00:29:59.156 START TEST nvmf_target_disconnect
00:29:59.156 ************************************
00:29:59.156 20:30:36 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:59.415 * Looking for test storage...
00:29:59.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:59.415 20:30:36 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:59.415 20:30:36 -- nvmf/common.sh@7 -- # uname -s
00:29:59.415 20:30:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:59.415 20:30:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:59.415 20:30:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:59.415 20:30:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:59.415 20:30:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:59.415 20:30:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:59.415 20:30:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:59.415 20:30:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:59.415 20:30:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:59.415 20:30:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:59.415 20:30:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:29:59.415 20:30:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:29:59.415 20:30:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:59.415 20:30:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:59.415 20:30:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy
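The NVME_HOSTNQN / NVME_HOSTID pair generated above is what the NVME_HOST array later feeds to nvme-cli. A hedged sketch of the same pattern as a standalone command (the flags are standard nvme-cli; the UUID-stripping line is an assumption that mirrors the values shown in the trace, where the host ID equals the NQN's UUID suffix):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, as in the trace
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: the host ID is the bare UUID suffix, matching @18 above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"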
00:29:59.416 20:30:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:59.416 20:30:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:59.416 20:30:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:59.416 20:30:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:59.416 20:30:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:59.416 20:30:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:59.416 20:30:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:59.416 20:30:36 -- paths/export.sh@5 -- # export PATH
00:29:59.416 20:30:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:59.416 20:30:36 -- nvmf/common.sh@46 -- # : 0
00:29:59.416 20:30:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:29:59.416 20:30:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:29:59.416 20:30:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:29:59.416 20:30:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:59.416 20:30:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:59.416 20:30:36 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:29:59.416 20:30:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:29:59.416 20:30:36 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:29:59.416 20:30:36 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:29:59.416 20:30:36 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:29:59.416 20:30:36 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:29:59.416 20:30:36 -- host/target_disconnect.sh@77 -- # nvmftestinit
00:29:59.416 20:30:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:29:59.416 20:30:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:59.416 20:30:36 -- nvmf/common.sh@436 -- # prepare_net_devs
00:29:59.416 20:30:36 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:29:59.416 20:30:36 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:29:59.416 20:30:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:59.416 20:30:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:59.416 20:30:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:59.416 20:30:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:29:59.416 20:30:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:29:59.416 20:30:36 -- nvmf/common.sh@284 -- # xtrace_disable
00:29:59.416 20:30:36 -- common/autotest_common.sh@10 -- # set +x
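gather_supported_nvmf_pci_devs, traced below, walks the PCI bus and keeps only NICs whose vendor:device IDs appear in the e810/x722/mlx lists. A hedged lspci equivalent of the match this node ends up making (the IDs are the ones in the trace; lspci's -d filter takes vendor:device):

    lspci -nn -d 8086:159b   # Intel E810 (0x159b); should list 0000:af:00.0 and 0000:af:00.1 as found below
    lspci -nn -d 8086:1592   # the other E810 device ID on the e810 list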
20:30:42 -- nvmf/common.sh@294 -- # net_devs=() 00:30:05.983 20:30:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:05.983 20:30:42 -- nvmf/common.sh@295 -- # e810=() 00:30:05.983 20:30:42 -- nvmf/common.sh@295 -- # local -ga e810 00:30:05.983 20:30:42 -- nvmf/common.sh@296 -- # x722=() 00:30:05.983 20:30:42 -- nvmf/common.sh@296 -- # local -ga x722 00:30:05.983 20:30:42 -- nvmf/common.sh@297 -- # mlx=() 00:30:05.983 20:30:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:05.983 20:30:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.983 20:30:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:05.983 20:30:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:05.983 20:30:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:05.983 20:30:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:05.983 20:30:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:05.983 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:05.983 20:30:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:05.983 20:30:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:05.983 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:05.983 20:30:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:05.983 20:30:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:05.983 20:30:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.983 20:30:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:05.983 20:30:42 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.983 20:30:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:05.983 Found net devices under 0000:af:00.0: cvl_0_0 00:30:05.983 20:30:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.983 20:30:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:05.983 20:30:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.983 20:30:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:05.983 20:30:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.983 20:30:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:05.983 Found net devices under 0000:af:00.1: cvl_0_1 00:30:05.983 20:30:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.983 20:30:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:05.983 20:30:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:05.983 20:30:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:05.983 20:30:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.983 20:30:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.983 20:30:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.983 20:30:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:05.983 20:30:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.983 20:30:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.983 20:30:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:05.983 20:30:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.983 20:30:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.983 20:30:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:05.983 20:30:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:05.983 20:30:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.983 20:30:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.983 20:30:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.983 20:30:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.983 20:30:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:05.983 20:30:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.983 20:30:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.983 20:30:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.983 20:30:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:05.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:30:05.983 00:30:05.983 --- 10.0.0.2 ping statistics --- 00:30:05.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.983 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:30:05.983 20:30:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.452 ms 00:30:05.983 00:30:05.983 --- 10.0.0.1 ping statistics --- 00:30:05.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.983 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:30:05.983 20:30:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.983 20:30:42 -- nvmf/common.sh@410 -- # return 0 00:30:05.983 20:30:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:05.983 20:30:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.983 20:30:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:05.983 20:30:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.983 20:30:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:05.983 20:30:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:05.983 20:30:42 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:05.983 20:30:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:05.983 20:30:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:05.983 20:30:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.983 ************************************ 00:30:05.983 START TEST nvmf_target_disconnect_tc1 00:30:05.983 ************************************ 00:30:05.983 20:30:42 -- common/autotest_common.sh@1102 -- # nvmf_target_disconnect_tc1 00:30:05.983 20:30:42 -- host/target_disconnect.sh@32 -- # set +e 00:30:05.983 20:30:42 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.983 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.983 [2024-02-14 20:30:42.713703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.983 [2024-02-14 20:30:42.714253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.983 [2024-02-14 20:30:42.714272] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ad3e0 with addr=10.0.0.2, port=4420 00:30:05.983 [2024-02-14 20:30:42.714295] nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:05.983 [2024-02-14 20:30:42.714310] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:05.983 [2024-02-14 20:30:42.714319] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:05.983 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:05.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:05.983 Initializing NVMe Controllers 00:30:05.983 20:30:42 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:05.983 20:30:42 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:05.983 20:30:42 -- common/autotest_common.sh@1130 -- # [[ hxBET =~ e ]] 00:30:05.983 20:30:42 -- common/autotest_common.sh@1130 -- # return 0 00:30:05.983 20:30:42 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:05.983 20:30:42 -- host/target_disconnect.sh@41 -- # set -e 00:30:05.983 00:30:05.983 real 0m0.087s 00:30:05.983 user 0m0.037s 00:30:05.983 sys 0m0.050s 00:30:05.983 20:30:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:05.983 20:30:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.983 ************************************ 00:30:05.983 
END TEST nvmf_target_disconnect_tc1 00:30:05.983 ************************************ 00:30:05.984 20:30:42 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:05.984 20:30:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:05.984 20:30:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:05.984 20:30:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.984 ************************************ 00:30:05.984 START TEST nvmf_target_disconnect_tc2 00:30:05.984 ************************************ 00:30:05.984 20:30:42 -- common/autotest_common.sh@1102 -- # nvmf_target_disconnect_tc2 00:30:05.984 20:30:42 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:05.984 20:30:42 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:05.984 20:30:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:05.984 20:30:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:05.984 20:30:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.984 20:30:42 -- nvmf/common.sh@469 -- # nvmfpid=1966193 00:30:05.984 20:30:42 -- nvmf/common.sh@470 -- # waitforlisten 1966193 00:30:05.984 20:30:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:05.984 20:30:42 -- common/autotest_common.sh@817 -- # '[' -z 1966193 ']' 00:30:05.984 20:30:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.984 20:30:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:05.984 20:30:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.984 20:30:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:05.984 20:30:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.984 [2024-02-14 20:30:42.820799] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:30:05.984 [2024-02-14 20:30:42.820839] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.984 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.984 [2024-02-14 20:30:42.895223] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.984 [2024-02-14 20:30:42.969825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:05.984 [2024-02-14 20:30:42.969946] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.984 [2024-02-14 20:30:42.969954] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.984 [2024-02-14 20:30:42.969960] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
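(Aside: the nvmfappstart + disconnect_init sequence being traced here reduces to the hand-runnable sketch below. The binary path, core mask, netns name, and every rpc argument are taken from this log's own xtrace; the rpc_get_methods readiness poll is an illustrative stand-in for the harness's waitforlisten helper, not what the script literally runs.)

    # start the target inside the cvl_0_0_ns_spdk namespace set up by nvmf_tcp_init above
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # wait until the target answers on its RPC socket (stand-in for waitforlisten)
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # the rpc_cmd calls logged below, issued directly:
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420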
00:30:05.984 [2024-02-14 20:30:42.970082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:05.984 [2024-02-14 20:30:42.970197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:05.984 [2024-02-14 20:30:42.970238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.984 [2024-02-14 20:30:42.970240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:06.241 20:30:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:06.241 20:30:43 -- common/autotest_common.sh@850 -- # return 0 00:30:06.241 20:30:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:06.241 20:30:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:06.241 20:30:43 -- common/autotest_common.sh@10 -- # set +x 00:30:06.241 20:30:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.241 20:30:43 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.241 20:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.241 20:30:43 -- common/autotest_common.sh@10 -- # set +x 00:30:06.241 Malloc0 00:30:06.499 20:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.499 20:30:43 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:06.499 20:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.499 20:30:43 -- common/autotest_common.sh@10 -- # set +x 00:30:06.499 [2024-02-14 20:30:43.665485] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.499 20:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.499 20:30:43 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.499 20:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.499 20:30:43 -- common/autotest_common.sh@10 -- # set +x 00:30:06.499 20:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.499 20:30:43 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.499 20:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.499 20:30:43 -- common/autotest_common.sh@10 -- # set +x 00:30:06.499 20:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.499 20:30:43 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.499 20:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.499 20:30:43 -- common/autotest_common.sh@10 -- # set +x 00:30:06.499 [2024-02-14 20:30:43.690520] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.499 20:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.499 20:30:43 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.499 20:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.499 20:30:43 -- common/autotest_common.sh@10 -- # set +x 00:30:06.499 20:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.499 20:30:43 -- host/target_disconnect.sh@50 -- # reconnectpid=1966440 00:30:06.499 20:30:43 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:06.499 20:30:43 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:06.499 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.398 20:30:45 -- host/target_disconnect.sh@53 -- # kill -9 1966193 00:30:08.398 20:30:45 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:08.398 Read completed with error (sct=0, sc=8) 00:30:08.398 starting I/O failed 00:30:08.398 Write completed with error (sct=0, sc=8) 00:30:08.398 starting I/O failed [... one such Read/Write error completion per outstanding I/O on the qpair, identical records omitted ...] [2024-02-14 20:30:45.719031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 [... identical error-completion burst omitted ...] [2024-02-14 20:30:45.719231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 [... identical error-completion burst omitted ...] [2024-02-14 20:30:45.719416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 [... identical error-completion burst omitted ...] [2024-02-14 20:30:45.719601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.399 [2024-02-14 20:30:45.720070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-02-14 20:30:45.720514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-02-14 20:30:45.720547] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.399 qpair failed and we were unable to recover it.
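(Aside: errno 111 on Linux is ECONNREFUSED. Once nvmf_tgt has been SIGKILLed, nothing listens on 10.0.0.2:4420 any more, so every connect() the reconnect example retries is refused outright, which is exactly what posix_sock_create logs. A minimal, hypothetical way to observe the same failure from a shell, not part of the test:)

    # expected to fail while the target is down: bash's /dev/tcp performs a plain
    # TCP connect(), which gets ECONNREFUSED (errno 111) just like posix_sock_create
    if ! bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection, as expected"
    fi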
00:30:08.400 [... this connect()-retry pattern -- two posix.c:1037:posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats for roughly 80 further attempts, [2024-02-14 20:30:45.721000] through [2024-02-14 20:30:45.785613]; identical records omitted ...]
00:30:08.402 [2024-02-14 20:30:45.786045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.402 [2024-02-14 20:30:45.786499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.402 [2024-02-14 20:30:45.786528] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.402 qpair failed and we were unable to recover it. 00:30:08.402 [2024-02-14 20:30:45.786858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.402 [2024-02-14 20:30:45.787232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.402 [2024-02-14 20:30:45.787247] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.402 qpair failed and we were unable to recover it. 00:30:08.402 [2024-02-14 20:30:45.787596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.402 [2024-02-14 20:30:45.788055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.402 [2024-02-14 20:30:45.788085] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.402 qpair failed and we were unable to recover it. 00:30:08.402 [2024-02-14 20:30:45.788412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.402 [2024-02-14 20:30:45.788840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.788870] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.789253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.789659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.789688] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.790141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.790576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.790605] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.791002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.791390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.791420] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 
00:30:08.403 [2024-02-14 20:30:45.791803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.792190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.792219] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.792665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.793025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.793054] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.793454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.793614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.793643] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.794121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.794489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.794517] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.794870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.795197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.795226] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.795626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.795957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.795986] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.796356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.796757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.796786] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 
00:30:08.403 [2024-02-14 20:30:45.797176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.797534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.797563] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.797927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.798302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.798331] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.798731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.799124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.799153] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.799545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.799996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.800025] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.800404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.800806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.800820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.801016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.801433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.801464] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.801912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.802347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.802376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 
00:30:08.403 [2024-02-14 20:30:45.802752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.803141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.803170] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.803551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.803981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.804010] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.804398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.804776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.804805] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.805287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.805776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.805806] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.806269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.806590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.806620] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.403 [2024-02-14 20:30:45.806969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.807517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.403 [2024-02-14 20:30:45.807555] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.403 qpair failed and we were unable to recover it. 00:30:08.404 [2024-02-14 20:30:45.808119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.808587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.808603] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 
00:30:08.404 [2024-02-14 20:30:45.808986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.809390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.809405] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-02-14 20:30:45.809776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.810208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.810237] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-02-14 20:30:45.810688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.811064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.811079] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-02-14 20:30:45.811434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.811850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.811880] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-02-14 20:30:45.812291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.812655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-02-14 20:30:45.812671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.670 [2024-02-14 20:30:45.812959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.813398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.813412] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-02-14 20:30:45.813699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.814126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.814141] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 
00:30:08.670 [2024-02-14 20:30:45.814570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.814928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.814942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-02-14 20:30:45.815322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.815784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.815814] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-02-14 20:30:45.816275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.816753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.816784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-02-14 20:30:45.817162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.817539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.817568] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-02-14 20:30:45.818013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.818445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.818474] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-02-14 20:30:45.818805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.819201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.819230] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-02-14 20:30:45.819602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.819952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.819988] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 
00:30:08.670 [2024-02-14 20:30:45.820419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.820802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.820832] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-02-14 20:30:45.821287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-02-14 20:30:45.821697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.821713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.822068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.822668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.822683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.823115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.823492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.823521] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.823919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.824221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.824250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.824689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.825121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.825151] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.825528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.825979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.826009] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 
00:30:08.671 [2024-02-14 20:30:45.826337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.826914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.826944] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.827405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.827856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.827887] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.828201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.828562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.828596] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.829039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.829347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.829376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.829770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.830088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.830117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.830573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.831028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.831058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.831265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.831721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.831751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 
00:30:08.671 [2024-02-14 20:30:45.832130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.832612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.832640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.833115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.833438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.833467] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.833696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.833994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.834023] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.834325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.834635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.834681] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.835054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.835508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.835537] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.835933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.836303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.836333] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.836797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.837152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.837181] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 
00:30:08.671 [2024-02-14 20:30:45.837614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.838021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.838051] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.838430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.838807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.838837] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.839161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.839536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.839564] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.839934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.840387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.840417] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.840798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.841186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.841215] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.841683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.842136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.842165] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-02-14 20:30:45.842598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.843062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-02-14 20:30:45.843077] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 
00:30:08.672 [2024-02-14 20:30:45.843421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.843836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.843851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.844221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.844412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.844426] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.844781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.845163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.845193] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.845658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.846043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.846073] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.846451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.846817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.846847] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.847279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.847666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.847696] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.848080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.848310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.848339] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 
00:30:08.672 [2024-02-14 20:30:45.848644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.849054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.849068] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.849470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.849922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.849952] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.850388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.850704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.850746] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.850971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.851375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.851404] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.851867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.852232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.852260] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.852717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.853049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.853079] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.853557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.854011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.854041] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 
00:30:08.672 [2024-02-14 20:30:45.854492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.854920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.854951] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.855369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.855798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.855828] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.856213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.856584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.856613] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.857079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.857519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.857549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.857999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.858339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.858368] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.858768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.859203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.859232] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.859688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.860129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.860159] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 
00:30:08.672 [2024-02-14 20:30:45.860526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.860985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.861015] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.861404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.861867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.861897] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.862355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.862784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.862821] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.863155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.863477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.863506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.863962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.864419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.864456] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.864862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.865241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.865271] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-02-14 20:30:45.865659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.866042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-02-14 20:30:45.866057] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.673 qpair failed and we were unable to recover it. 
00:30:08.673 [2024-02-14 20:30:45.866422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.673 [2024-02-14 20:30:45.866775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.673 [2024-02-14 20:30:45.866805] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:08.673 qpair failed and we were unable to recover it.
[... the same four-line failure group repeats back to back from 20:30:45.866422 through 20:30:45.991269 (roughly 150 attempts within about 125 ms of target time): two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock error for tqpair=0x2181510 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." Only the microsecond timestamps differ between repetitions; the duplicate groups are elided here for readability. ...]
00:30:08.679 [2024-02-14 20:30:45.991625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.992058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.992073] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:45.992355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.992710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.992726] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:45.992989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.993418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.993446] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:45.993855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.994286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.994300] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:45.994680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.994956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.994970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:45.995324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.995724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.995753] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:45.996227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.996553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.996581] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 
00:30:08.679 [2024-02-14 20:30:45.996952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.997387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.997402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:45.997711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.998161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.998175] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:45.998630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.999022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.999053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:45.999398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.999746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:45.999761] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.000164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.000615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.000629] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.000993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.001277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.001291] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.001419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.001770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.001789] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 
00:30:08.679 [2024-02-14 20:30:46.002087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.002387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.002402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.002757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.003099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.003114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.003478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.003759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.003774] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.004043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.004244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.004259] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.004617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.004971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.004985] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.005436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.005746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.005775] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.006089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.006505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.006519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 
00:30:08.679 [2024-02-14 20:30:46.006956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.007332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.007362] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.008978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.009352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.009371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.679 qpair failed and we were unable to recover it. 00:30:08.679 [2024-02-14 20:30:46.009685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.679 [2024-02-14 20:30:46.009890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.009904] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.010262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.010640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.010660] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.010954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.011361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.011376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.011673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.011803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.011818] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.012077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.012427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.012442] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 
00:30:08.680 [2024-02-14 20:30:46.012963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.013341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.013357] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.013786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.014163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.014193] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.014513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.014734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.014748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.015118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.015396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.015410] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.015830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.016116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.016131] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.016411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.016685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.016707] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.017054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.017400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.017415] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 
00:30:08.680 [2024-02-14 20:30:46.017841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.018122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.018136] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.018548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.018918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.018933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.019285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.019564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.019578] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.019874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.020210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.020224] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.020574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.020924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.020939] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.021304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.021598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.021612] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.021991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.022332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.022347] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 
00:30:08.680 [2024-02-14 20:30:46.022697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.023130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.023145] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.023338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.023638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.023657] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.023936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.024216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.024230] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.024634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.025064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.025079] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.025436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.025800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.025815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-02-14 20:30:46.026110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-02-14 20:30:46.026305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.026320] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.026722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.027012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.027027] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 
00:30:08.681 [2024-02-14 20:30:46.027372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.027774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.027789] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.028127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.028480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.028495] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.028857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.029208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.029222] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.029573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.029984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.029999] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.030320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.030783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.030797] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.031241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.031552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.031567] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.031875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.032273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.032303] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 
00:30:08.681 [2024-02-14 20:30:46.032626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.033040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.033070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.033509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.033918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.033948] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.034644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.034951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.034967] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.035254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.035743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.035759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.036051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.036341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.036356] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.036759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.037187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.037218] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.037673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.037870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.037899] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 
00:30:08.681 [2024-02-14 20:30:46.038286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.038603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.038634] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.038975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.039392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.039427] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.039890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.040276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.040305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.040684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.041034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.041063] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.041523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.041835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.041874] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.042184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.042545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.042560] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.042907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.043176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.043191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 
00:30:08.681 [2024-02-14 20:30:46.043554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.043984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.044014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.044329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.044691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.044707] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.044993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.045476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.045505] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-02-14 20:30:46.045972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-02-14 20:30:46.046436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.046465] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.046902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.047276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.047304] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.047630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.048170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.048200] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.048636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.049040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.049070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 
00:30:08.682 [2024-02-14 20:30:46.049461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.049869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.049885] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.050256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.050671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.050701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.051091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.051479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.051508] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.051885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.052223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.052252] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.052712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.053102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.053131] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.053448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.053641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.053682] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.053844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.054304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.054332] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 
00:30:08.682 [2024-02-14 20:30:46.054793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.055181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.055211] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.055544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.055912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.055942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.056324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.056755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.056784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.057173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.057350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.057379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.057781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.058093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.058123] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.058505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.058886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.058918] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.059306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.059711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.059741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 
00:30:08.682 [2024-02-14 20:30:46.060108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.060459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.060473] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.060826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.061191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.061206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.061576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.061860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.061876] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.062281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.062700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.062716] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.063072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.063438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.063453] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.063790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.064142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.064157] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-02-14 20:30:46.064452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.064793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-02-14 20:30:46.064809] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 
00:30:08.682 [2024-02-14 20:30:46.065083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.682 [2024-02-14 20:30:46.065413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.682 [2024-02-14 20:30:46.065427] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:08.682 qpair failed and we were unable to recover it.
00:30:08.682 [2024-02-14 20:30:46.065657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.682 [2024-02-14 20:30:46.066088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.682 [2024-02-14 20:30:46.066117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:08.682 qpair failed and we were unable to recover it.
[... the same four-line pattern repeats, only the timestamps advancing, through 2024-02-14 20:30:46.183384 (elapsed 00:30:09.003): every connect() attempt to 10.0.0.2, port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x2181510, and each qpair fails and cannot be recovered ...]
00:30:09.003 [2024-02-14 20:30:46.183769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.184165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.184195] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.184596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.184948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.184978] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.185368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.185761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.185791] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.186176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.186658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.186677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.187068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.187548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.187577] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.188037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.188510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.188539] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.188929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.189341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.189371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 
00:30:09.003 [2024-02-14 20:30:46.189759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.190142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.190170] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.190569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.190948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.190979] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.191369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.191797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.191812] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.192127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.192669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.192699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.193152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.193538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.193568] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.193970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.194458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.194487] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.194983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.195417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.195446] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 
00:30:09.003 [2024-02-14 20:30:46.195861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.196199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.196228] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.196606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.197062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.197091] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.197490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.197931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.197962] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.198289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.198754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.198798] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.199191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.199673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.199703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.200138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.200577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.200613] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.201033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.201496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.201525] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 
00:30:09.003 [2024-02-14 20:30:46.201963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.202388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.202403] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.202802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.203147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.203178] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.003 qpair failed and we were unable to recover it. 00:30:09.003 [2024-02-14 20:30:46.203638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.003 [2024-02-14 20:30:46.204027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.204056] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.204440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.204896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.204927] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.205263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.205727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.205758] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.206145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.206564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.206599] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.207005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.207337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.207366] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 
00:30:09.004 [2024-02-14 20:30:46.207755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.208143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.208172] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.208630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.209106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.209135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.209476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.209940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.209955] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.210351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.210805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.210835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.211218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.211594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.211623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.212016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.212466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.212495] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.212825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.213276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.213305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 
00:30:09.004 [2024-02-14 20:30:46.213716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.214110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.214140] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.214572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.214996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.215026] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.215427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.215882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.215912] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.216349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.216833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.216864] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.217253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.217723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.217754] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.218097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.218418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.218447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.218910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.219246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.219275] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 
00:30:09.004 [2024-02-14 20:30:46.219771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.220155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.220183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.220676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.221048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.221076] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.221464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.221847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.221862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.222239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.222682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.222712] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.004 qpair failed and we were unable to recover it. 00:30:09.004 [2024-02-14 20:30:46.223146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.004 [2024-02-14 20:30:46.223555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.223584] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.223978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.224273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.224287] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.224627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.225002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.225032] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 
00:30:09.005 [2024-02-14 20:30:46.225431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.225799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.225815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.226252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.226718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.226748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.227132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.227521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.227551] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.227945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.228357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.228386] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.228774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.229087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.229102] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.229629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.230141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.230156] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.230653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.231048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.231064] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 
00:30:09.005 [2024-02-14 20:30:46.231497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.231898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.231928] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.232323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.232753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.232768] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.233127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.233433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.233448] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.233830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.234262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.234277] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.234826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.235133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.235148] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.235602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.235969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.235985] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.236295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.236732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.236748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 
00:30:09.005 [2024-02-14 20:30:46.237056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.237362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.237376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.237755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.238112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.238126] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.238656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.238987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.239022] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.239345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.239762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.239792] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.240183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.240659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.240675] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.240990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.241311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.241340] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.241798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.242151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.242166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 
00:30:09.005 [2024-02-14 20:30:46.242618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.242995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.243025] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.243433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.243900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.243916] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.244225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.244668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.244683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.005 qpair failed and we were unable to recover it. 00:30:09.005 [2024-02-14 20:30:46.245037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.005 [2024-02-14 20:30:46.245393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.245407] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.245942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.246410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.246439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.246900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.247209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.247223] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.247715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.248017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.248032] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-14 20:30:46.248397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.248844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.248862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.249221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.249507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.249522] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.249971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.250273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.250287] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.250738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.251121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.251137] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.251534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.251966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.251997] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.252342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.252800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.252831] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.253395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.253868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.253899] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-14 20:30:46.254288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.254763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.254778] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.255222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.255616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.255645] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.256031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.256402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.256417] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.256851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.257205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.257223] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.257611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.257983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.258014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.258454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.258849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.258880] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.259344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.259721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.259736] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-14 20:30:46.260171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.260606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.260635] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.261138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.261579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.261608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.262019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.262468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.262496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.263002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.263384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.263413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.263898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.264328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.264344] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.264699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.265059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.265074] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 00:30:09.006 [2024-02-14 20:30:46.265479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.265798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.006 [2024-02-14 20:30:46.265813] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-14 20:30:46.266212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.006 [2024-02-14 20:30:46.266624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.006 [2024-02-14 20:30:46.266665] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:09.006 qpair failed and we were unable to recover it.
[... the same three-message pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x2181510 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats continuously, timestamps aside, until 20:30:46.401 ...]
00:30:09.013 [2024-02-14 20:30:46.401040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.013 [2024-02-14 20:30:46.401383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.013 [2024-02-14 20:30:46.401413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:09.013 qpair failed and we were unable to recover it.
00:30:09.013 [2024-02-14 20:30:46.401886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.402281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.402297] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.013 qpair failed and we were unable to recover it. 00:30:09.013 [2024-02-14 20:30:46.402719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.403098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.403114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.013 qpair failed and we were unable to recover it. 00:30:09.013 [2024-02-14 20:30:46.403572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.403970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.404001] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.013 qpair failed and we were unable to recover it. 00:30:09.013 [2024-02-14 20:30:46.404476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.404961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.404992] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.013 qpair failed and we were unable to recover it. 00:30:09.013 [2024-02-14 20:30:46.405444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.405844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.013 [2024-02-14 20:30:46.405877] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.013 qpair failed and we were unable to recover it. 00:30:09.013 [2024-02-14 20:30:46.406243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.406551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.406567] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.406988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.407370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.407386] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 
00:30:09.278 [2024-02-14 20:30:46.407769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.408091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.408107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.408558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.408949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.408980] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.409410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.409886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.409917] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.410369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.410856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.410896] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.411348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.411815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.411846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.412201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.412671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.412701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.413047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.413463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.413493] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 
00:30:09.278 [2024-02-14 20:30:46.413896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.414304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.414334] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.414787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.415256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.415286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.415761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.416173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.416203] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.416694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.417040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.417070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.417519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.417980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.418011] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.418495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.418931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.418961] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 00:30:09.278 [2024-02-14 20:30:46.419374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.419777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.419808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.278 qpair failed and we were unable to recover it. 
00:30:09.278 [2024-02-14 20:30:46.420260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.278 [2024-02-14 20:30:46.420728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.420759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.421102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.421457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.421473] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.421899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.422301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.422330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.422813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.423213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.423242] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.423636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.424091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.424121] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.424602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.425003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.425033] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.425458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.425930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.425961] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 
00:30:09.279 [2024-02-14 20:30:46.426295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.426705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.426736] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.427139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.427605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.427636] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.428103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.428637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.428659] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.429034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.429351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.429367] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.429756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.430131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.430160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.430590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.430981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.431012] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.431364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.431822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.431838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 
00:30:09.279 [2024-02-14 20:30:46.432210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.432596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.432612] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.432985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.433399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.433429] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.433818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.434150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.434180] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.434700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.435148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.435191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.435688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.436043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.436078] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.436598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.436995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.437033] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.437564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.437984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.438000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 
00:30:09.279 [2024-02-14 20:30:46.438375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.438773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.438804] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.439194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.439628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.439671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.440014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.440469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.440499] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.440904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.441379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.441408] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.441825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.442176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.442206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.279 [2024-02-14 20:30:46.442601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.443013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.279 [2024-02-14 20:30:46.443043] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.279 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.443393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.443881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.443912] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 
00:30:09.280 [2024-02-14 20:30:46.444256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.444661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.444680] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.444995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.445319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.445350] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.445759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.446160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.446190] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.446608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.446983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.447014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.447407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.447873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.447905] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.448356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.448775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.448805] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.449207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.449603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.449634] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 
00:30:09.280 [2024-02-14 20:30:46.450100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.450463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.450493] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.450943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.451339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.451355] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.451735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.452134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.452164] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.452676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.453015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.453058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.453398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.453761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.453791] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.454195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.454562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.454593] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.454967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.455308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.455338] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 
00:30:09.280 [2024-02-14 20:30:46.455745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.456091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.456121] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.456519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.456948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.456979] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.457437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.457818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.457849] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.458298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.458779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.458809] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.459169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.459668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.459698] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.460109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.460515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.460530] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.460955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.461328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.461357] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 
00:30:09.280 [2024-02-14 20:30:46.461844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.462250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.462280] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.462750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.463152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.463182] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.463602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.464034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.464065] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.280 qpair failed and we were unable to recover it. 00:30:09.280 [2024-02-14 20:30:46.464420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.280 [2024-02-14 20:30:46.464838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.464869] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.465219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.465624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.465662] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.466067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.466472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.466487] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.466879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.467256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.467286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 
00:30:09.281 [2024-02-14 20:30:46.467767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.468155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.468184] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.468676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.469127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.469156] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.469576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.470042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.470072] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.470568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.470955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.470987] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.471351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.471800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.471831] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.472284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.472728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.472744] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.473067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.473484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.473499] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 
00:30:09.281 [2024-02-14 20:30:46.473861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.474282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.474312] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.474821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.475294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.475309] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.475792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.476151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.476166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.476581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.477020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.477036] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.477351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.477716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.477746] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.478144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.478591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.478606] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.479049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.479492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.479507] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 
00:30:09.281 [2024-02-14 20:30:46.479894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.480333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.480363] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.480863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.481259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.481288] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.481752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.482199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.482229] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.482723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.483023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.483038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.483432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.483819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.483835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.484218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.484603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.484633] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.485035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.485450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.485465] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 
00:30:09.281 [2024-02-14 20:30:46.485860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.486229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.486244] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.486697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.487139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.487154] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.281 qpair failed and we were unable to recover it. 00:30:09.281 [2024-02-14 20:30:46.487574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.281 [2024-02-14 20:30:46.487941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.487960] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.488415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.488891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.488922] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.489317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.489773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.489789] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.490183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.490609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.490625] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.490939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.491375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.491390] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 
00:30:09.282 [2024-02-14 20:30:46.491747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.492221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.492251] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.492739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.493130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.493146] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.493508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.493876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.493892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.494309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.494745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.494762] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.495078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.495443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.495459] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.495897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.496284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.496313] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.496725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.497123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.497152] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 
00:30:09.282 [2024-02-14 20:30:46.497553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.497990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.498021] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.498455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.498939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.498970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.499462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.499929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.499960] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.500375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.500816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.500832] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.501218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.501660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.501676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.502101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.502561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.502577] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.503022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.503470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.503485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 
00:30:09.282 [2024-02-14 20:30:46.503926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.504366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.504381] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.504819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.505258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.505274] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.505645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.506022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.506037] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.506389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.506765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.506781] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.507226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.507611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.507640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.508150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.508635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.508655] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 00:30:09.282 [2024-02-14 20:30:46.509122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.509615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.509644] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.282 qpair failed and we were unable to recover it. 
00:30:09.282 [2024-02-14 20:30:46.510058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.282 [2024-02-14 20:30:46.510527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.510556] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.511066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.511551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.511581] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.512028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.512495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.512510] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.512999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.513445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.513460] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.513824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.514180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.514195] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.514667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.515156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.515172] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.515628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.516143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.516173] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 
00:30:09.283 [2024-02-14 20:30:46.516671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.517067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.517096] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.517569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.518007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.518023] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.518407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.518844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.518878] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.519365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.519838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.519868] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.520346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.520814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.520831] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.521218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.521520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.521535] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.521919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.522395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.522424] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 
00:30:09.283 [2024-02-14 20:30:46.522870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.523321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.523350] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.523821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.524220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.524250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.524716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.525185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.525215] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.525697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.526202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.526231] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.526684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.527156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.527186] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.527661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.528109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.283 [2024-02-14 20:30:46.528137] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.283 qpair failed and we were unable to recover it. 00:30:09.283 [2024-02-14 20:30:46.528630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.529045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.529089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 
00:30:09.284 [2024-02-14 20:30:46.529532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.529901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.529931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.530321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.530786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.530817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.531264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.531741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.531771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.532243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.532641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.532680] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.533152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.533544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.533583] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.534067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.534469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.534484] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.534945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.535438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.535467] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 
00:30:09.284 [2024-02-14 20:30:46.535939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.536408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.536438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.536898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.537347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.537376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.537808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.538303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.538332] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.538788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.539179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.539208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.539655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.540110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.540139] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.540598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.541050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.541081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.541578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.542073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.542104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 
00:30:09.284 [2024-02-14 20:30:46.542633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.543121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.543151] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.543631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.544132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.544162] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.544677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.545170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.545200] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.545609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.546091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.546122] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.546615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.547025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.547056] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.547554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.547890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.547921] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.548390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.548835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.548865] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 
00:30:09.284 [2024-02-14 20:30:46.549362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.549771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.549787] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.550243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.550745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.550776] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.284 qpair failed and we were unable to recover it. 00:30:09.284 [2024-02-14 20:30:46.551195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.284 [2024-02-14 20:30:46.551669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.551700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.552116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.552516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.552545] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.552944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.553423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.553452] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.553960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.554409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.554438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.554890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.555361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.555390] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 
00:30:09.285 [2024-02-14 20:30:46.555822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.556314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.556344] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.556880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.557277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.557306] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.557789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.558260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.558289] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.558808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.559289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.559320] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.559818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.560317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.560347] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.560856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.561349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.561378] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.561874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.562285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.562314] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 
00:30:09.285 [2024-02-14 20:30:46.562810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.563120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.563149] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.563669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.564185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.564215] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.564698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.565093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.565122] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.565597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.565998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.566014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.566372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.566707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.566737] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.567208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.567602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.567631] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.568124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.568616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.568645] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 
00:30:09.285 [2024-02-14 20:30:46.569172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.569669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.569700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.570205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.570614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.570643] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.571156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.571620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.571660] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.572137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.572611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.572641] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.573073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.573591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.573620] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.574132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.574596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.574611] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 00:30:09.285 [2024-02-14 20:30:46.575105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.575603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.285 [2024-02-14 20:30:46.575632] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.285 qpair failed and we were unable to recover it. 
00:30:09.286 [2024-02-14 20:30:46.576162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.576630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.576668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.577074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.577471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.577501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.577973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.578316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.578346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.578808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.579238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.579253] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.579673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.580104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.580133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.580637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.581087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.581117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.581520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.581972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.582008] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 
00:30:09.286 [2024-02-14 20:30:46.582464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.582931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.582963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.583449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.583948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.583978] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.584417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.584819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.584850] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.585189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.585663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.585694] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.586095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.586502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.586532] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.586980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.587452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.587491] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.587933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.588293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.588322] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 
00:30:09.286 [2024-02-14 20:30:46.588778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.589241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.589270] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.589768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.590216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.590245] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.590750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.591245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.591291] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.591748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.592147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.592176] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.592568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.593033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.593064] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.593491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.593970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.594000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.594504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.594951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.594983] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 
00:30:09.286 [2024-02-14 20:30:46.595455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.595927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.595958] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.596435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.596811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.596842] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.597342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.597830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.597860] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.598336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.598733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.598763] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.599161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.599635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.286 [2024-02-14 20:30:46.599674] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.286 qpair failed and we were unable to recover it. 00:30:09.286 [2024-02-14 20:30:46.600185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.600605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.600635] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-02-14 20:30:46.601143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.601598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.601627] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 
00:30:09.287 [2024-02-14 20:30:46.602137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.602506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.602537] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-02-14 20:30:46.602958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.603362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.603393] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-02-14 20:30:46.603844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.604244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.604273] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-02-14 20:30:46.604683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.605149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.605177] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-02-14 20:30:46.605633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.606086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.606117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-02-14 20:30:46.606595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.607095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.607126] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-02-14 20:30:46.607622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.608098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-02-14 20:30:46.608128] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 
[2024-02-14 20:30:46.608520 through 20:30:46.738827: the same failure sequence repeats for roughly 140 further connection attempts. Each attempt logs posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (twice), then nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420, then qpair failed and we were unable to recover it.]
00:30:09.557 [2024-02-14 20:30:46.739157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.739552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.739568] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 00:30:09.557 [2024-02-14 20:30:46.739938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.740322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.740352] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 00:30:09.557 [2024-02-14 20:30:46.740825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.741244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.741260] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 00:30:09.557 [2024-02-14 20:30:46.741688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.742127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.742142] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 00:30:09.557 [2024-02-14 20:30:46.742557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.742938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.742956] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 00:30:09.557 [2024-02-14 20:30:46.743398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.743777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.743808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 00:30:09.557 [2024-02-14 20:30:46.744281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.744729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.744760] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 
00:30:09.557 [2024-02-14 20:30:46.745176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.745641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.745680] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 00:30:09.557 [2024-02-14 20:30:46.746073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.746491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.746506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 00:30:09.557 [2024-02-14 20:30:46.746969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.747381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.557 [2024-02-14 20:30:46.747396] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.557 qpair failed and we were unable to recover it. 00:30:09.557 [2024-02-14 20:30:46.747797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.748263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.748292] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.748776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.749218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.749233] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.749673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.750037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.750066] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.750462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.750920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.750936] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 
00:30:09.558 [2024-02-14 20:30:46.751377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.751820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.751851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.752257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.752718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.752735] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.753180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.753652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.753668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.754148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.754523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.754539] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.754985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.755386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.755402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.755826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.756277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.756292] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.756660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.757045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.757060] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 
00:30:09.558 [2024-02-14 20:30:46.757446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.757808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.757824] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.758265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.758658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.758688] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.759154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.759637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.759663] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.760144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.760588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.760618] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.761057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.761497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.761513] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.761955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.762394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.762409] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.762873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.763266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.763282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 
00:30:09.558 [2024-02-14 20:30:46.763636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.764031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.764047] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.764399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.764831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.764846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.765212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.765660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.765676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.766101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.766443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.766458] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.766874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.767236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.767252] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.767628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.768096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.768127] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.768549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.769017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.769048] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 
00:30:09.558 [2024-02-14 20:30:46.769526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.770025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.770055] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.770463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.770857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.770889] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.771347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.771769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.771800] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.558 qpair failed and we were unable to recover it. 00:30:09.558 [2024-02-14 20:30:46.772292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.772738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-02-14 20:30:46.772769] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.773118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.773579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.773608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.774000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.774412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.774442] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.774890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.775372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.775402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 
00:30:09.559 [2024-02-14 20:30:46.775880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.776280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.776308] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.776759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.777227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.777256] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.777744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.778182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.778197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.778570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.778975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.779007] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.779408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.779855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.779886] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.780277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.780666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.780696] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.781169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.781579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.781609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 
00:30:09.559 [2024-02-14 20:30:46.782097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.782477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.782493] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.782937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.783329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.783359] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.783841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.784313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.784342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.784797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.785179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.785208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.785597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.786073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.786103] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.786553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.787023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.787055] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.787539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.787981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.788000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 
00:30:09.559 [2024-02-14 20:30:46.788367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.788767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.788799] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.789274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.789741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.789772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.790169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.790667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.790698] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.791212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.791682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.791713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.792136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.792606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.792635] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.793094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.793510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.793540] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.793995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.794442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.794472] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 
00:30:09.559 [2024-02-14 20:30:46.794971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.795363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.795393] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.795867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.796335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.796365] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.796788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.797257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.797286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.559 qpair failed and we were unable to recover it. 00:30:09.559 [2024-02-14 20:30:46.797777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.798187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.559 [2024-02-14 20:30:46.798217] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.798704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.799093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.799122] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.799528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.799999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.800031] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.800499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.800967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.800999] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 
00:30:09.560 [2024-02-14 20:30:46.801472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.801941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.801972] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.802459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.802859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.802890] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.803357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.803784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.803815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.804219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.804610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.804640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.805138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.805583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.805612] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.806128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.806605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.806635] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.807196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.807668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.807700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 
00:30:09.560 [2024-02-14 20:30:46.808116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.808585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.808614] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.809035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.809431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.809461] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.809911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.810389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.810419] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.810898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.811363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.811392] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.811869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.812269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.812284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.812748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.813155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.813184] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.813664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.814073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.814089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 
00:30:09.560 [2024-02-14 20:30:46.814427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.814871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.814901] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.815352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.815748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.815779] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.816255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.816743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.816774] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.817252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.817666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.817696] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.818173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.818637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.818685] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.819219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.819611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.819640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.560 qpair failed and we were unable to recover it. 00:30:09.560 [2024-02-14 20:30:46.820046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-02-14 20:30:46.820554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.820584] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 
00:30:09.561 [2024-02-14 20:30:46.821086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.821558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.821587] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.822061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.822530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.822559] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.823068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.823459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.823488] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.823881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.824337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.824366] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.824864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.825351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.825381] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.825831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.826173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.826202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.826693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.827172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.827201] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 
00:30:09.561 [2024-02-14 20:30:46.827692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.828186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.828216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.828635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.829137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.829167] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.829675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.830142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.830172] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.830644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.831154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.831183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.831605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.832077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.832108] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.832505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.832974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.833005] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 00:30:09.561 [2024-02-14 20:30:46.833452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.833846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.561 [2024-02-14 20:30:46.833877] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.561 qpair failed and we were unable to recover it. 
00:30:09.561 [2024-02-14 20:30:46.834350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.561 [2024-02-14 20:30:46.834742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.561 [2024-02-14 20:30:46.834772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:09.561 qpair failed and we were unable to recover it.
00:30:09.561 [... the same four-line pattern repeats back-to-back for the remainder of this span (SPDK timestamps 2024-02-14 20:30:46.835206 through 20:30:46.974029, Jenkins timestamps 00:30:09.561 through 00:30:09.831): two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x2181510 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." Only the timestamps differ between repetitions. ...]
00:30:09.831 [2024-02-14 20:30:46.974505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.831 [2024-02-14 20:30:46.974900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.831 [2024-02-14 20:30:46.974931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:09.831 qpair failed and we were unable to recover it.
00:30:09.831 [2024-02-14 20:30:46.975382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.975857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.975888] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.831 qpair failed and we were unable to recover it. 00:30:09.831 [2024-02-14 20:30:46.976300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.976695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.976711] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.831 qpair failed and we were unable to recover it. 00:30:09.831 [2024-02-14 20:30:46.977134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.977579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.977609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.831 qpair failed and we were unable to recover it. 00:30:09.831 [2024-02-14 20:30:46.978096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.978565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.978595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.831 qpair failed and we were unable to recover it. 00:30:09.831 [2024-02-14 20:30:46.979089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.979481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.979496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.831 qpair failed and we were unable to recover it. 00:30:09.831 [2024-02-14 20:30:46.979925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.980308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.831 [2024-02-14 20:30:46.980338] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.831 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.980824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.981216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.981246] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 
00:30:09.832 [2024-02-14 20:30:46.981659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.982128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.982157] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.982634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.982970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.982985] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.983404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.983821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.983837] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.984291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.984660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.984679] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.985121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.985501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.985516] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.985962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.986403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.986418] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.986860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.987301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.987317] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 
00:30:09.832 [2024-02-14 20:30:46.987630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.988144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.988174] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.988512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.988888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.988919] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.989275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.989745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.989775] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.990187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.990563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.990592] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.990998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.991611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.991626] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.992091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.992405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.992420] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.992787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.993206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.993221] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 
00:30:09.832 [2024-02-14 20:30:46.993643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.994064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.994079] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.994447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.994834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.994850] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.995237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.995688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.995703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.996163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.996681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.996712] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.997136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.997629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.997653] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.998078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.998529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.998558] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:46.999047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.999418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:46.999447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 
00:30:09.832 [2024-02-14 20:30:46.999916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.000363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.000393] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:47.000829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.001275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.001303] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:47.001755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.002190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.002205] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:47.002654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.003051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.003066] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:47.003436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.003885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-02-14 20:30:47.003901] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.832 qpair failed and we were unable to recover it. 00:30:09.832 [2024-02-14 20:30:47.004346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.004717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.004732] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.005095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.005452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.005467] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 
00:30:09.833 [2024-02-14 20:30:47.005858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.006221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.006236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.006628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.007054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.007070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.007467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.007905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.007920] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.008480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.008992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.009023] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.009453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.009781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.009797] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.010178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.010622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.010661] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.011166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.011552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.011581] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 
00:30:09.833 [2024-02-14 20:30:47.011962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.012414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.012444] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.012934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.013377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.013392] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.013761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.014307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.014337] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.014834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.015305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.015334] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.015806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.016251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.016266] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.016710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.017176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.017205] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.017689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.018056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.018071] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 
00:30:09.833 [2024-02-14 20:30:47.018521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.018888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.018903] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.019343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.019865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.019896] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.020417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.020883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.020914] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.021397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.021791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.021821] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.022294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.022690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.022720] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.023070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.023481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.023510] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.023984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.024430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.024459] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 
00:30:09.833 [2024-02-14 20:30:47.024958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.025448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.025477] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.026022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.026407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.026437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.026914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.027385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.027413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.027866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.028338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.028367] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.028843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.029259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.029289] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.833 [2024-02-14 20:30:47.029744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.030214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-02-14 20:30:47.030249] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.833 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.030728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.031192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.031225] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 
00:30:09.834 [2024-02-14 20:30:47.031728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.032172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.032202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.032678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.033120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.033149] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.033554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.034032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.034047] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.034495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.034866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.034896] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.035295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.035762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.035792] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.036188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.036597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.036626] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.037131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.037576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.037605] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 
00:30:09.834 [2024-02-14 20:30:47.038115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.038608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.038637] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.039160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.039606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.039635] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.040144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.040635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.040677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.041174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.041616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.041659] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.042103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.042565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.042595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.043085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.043579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.043609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.044119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.044562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.044592] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 
00:30:09.834 [2024-02-14 20:30:47.044996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.045470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.045499] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.045923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.046398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.046427] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.046912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.047406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.047435] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.047868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.048354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.048393] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.048865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.049357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.049386] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.049904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.050350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.050379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.050776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.051257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.051287] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 
00:30:09.834 [2024-02-14 20:30:47.051759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.052181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.052211] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.052686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.053161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.053189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.053674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.054167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.054196] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.054708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.055170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.055199] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.055602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.056000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.056030] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.056507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.056952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.056982] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.834 qpair failed and we were unable to recover it. 00:30:09.834 [2024-02-14 20:30:47.057433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.834 [2024-02-14 20:30:47.057829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.057860] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 
00:30:09.835 [2024-02-14 20:30:47.058261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.058716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.058746] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.059247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.059680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.059711] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.060181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.060659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.060689] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.061169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.061610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.061640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.062146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.062561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.062590] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.063069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.063437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.063465] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.063858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.064329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.064358] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 
00:30:09.835 [2024-02-14 20:30:47.064749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.065216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.065245] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.065721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.066165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.066195] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.066583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.066999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.067030] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.067508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.068018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.068049] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.068507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.068989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.069020] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.069442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.069905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.069935] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 00:30:09.835 [2024-02-14 20:30:47.070416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.070859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.835 [2024-02-14 20:30:47.070874] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.835 qpair failed and we were unable to recover it. 
00:30:09.840 [2024-02-14 20:30:47.202115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.840 [2024-02-14 20:30:47.202520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.202549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.203017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.203402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.203431] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.203816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.204163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.204192] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.204669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.205134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.205163] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.205560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.205790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.205806] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.206039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.206349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.206379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.206775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.207150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.207178] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 
00:30:09.841 [2024-02-14 20:30:47.207569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.207805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.207835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.208303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.208701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.208717] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.209079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.209464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.209493] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.209896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.210367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.210396] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.210731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.211134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.211164] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.211561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.212001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.212040] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.212408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.212812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.212843] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 
00:30:09.841 [2024-02-14 20:30:47.213311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.213726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.213757] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.214246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.214568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.214598] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.215081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.215542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.215557] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.215974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.216436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.216465] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.216908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.217313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.217342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.217740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.218131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.218161] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.218624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.218998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.219029] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 
00:30:09.841 [2024-02-14 20:30:47.219419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.219791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.219822] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.220285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.220672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.220720] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.221108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.221472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.221490] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.221966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.222396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.222411] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.222774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.223172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.223201] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.223591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.224054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.224085] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.224472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.224811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.224827] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 
00:30:09.841 [2024-02-14 20:30:47.225232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.225585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.841 [2024-02-14 20:30:47.225600] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.841 qpair failed and we were unable to recover it. 00:30:09.841 [2024-02-14 20:30:47.225963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.226413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.226428] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.226738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.227099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.227114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.227467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.227846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.227861] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.228218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.228623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.228638] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.228998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.229409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.229438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.229838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.230318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.230333] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 
00:30:09.842 [2024-02-14 20:30:47.230641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.230952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.230966] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.231318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.231609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.231623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.232013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.232359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.232373] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.232783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.233037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.233052] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.233488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.233921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.233936] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.234303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.234738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.234768] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.235140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.235442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.235457] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 
00:30:09.842 [2024-02-14 20:30:47.235863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.236290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.236305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.236709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.237145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.237174] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.237577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.237867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.237882] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.238299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.238703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.238718] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.239033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.239387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.239402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.239777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.240132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.240146] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 00:30:09.842 [2024-02-14 20:30:47.240546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.241001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.842 [2024-02-14 20:30:47.241016] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:09.842 qpair failed and we were unable to recover it. 
00:30:09.842 [2024-02-14 20:30:47.241396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.241750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.241765] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.242042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.242255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.242270] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.242675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.242975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.242990] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.243426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.243884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.243914] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.244378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.244777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.244807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.245231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.245676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.245692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.246043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.246444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.246459] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 
00:30:10.108 [2024-02-14 20:30:47.246776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.247071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.247085] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.247372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.247804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.247819] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.248225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.248501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.248515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.248938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.249271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.249300] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.249708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.250139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.250153] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.250475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.250902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.250917] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.251322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.251670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.251686] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 
00:30:10.108 [2024-02-14 20:30:47.252093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.252441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.252456] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.252889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.253187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.253202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.253618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.253980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.253995] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.254289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.254739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.254754] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.255059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.255412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.255426] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.108 qpair failed and we were unable to recover it. 00:30:10.108 [2024-02-14 20:30:47.255577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.255843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.108 [2024-02-14 20:30:47.255858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.256221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.256571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.256585] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 
00:30:10.109 [2024-02-14 20:30:47.257014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.257317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.257345] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.257716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.258103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.258132] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.258586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.258853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.258883] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.259274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.259626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.259640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.260025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.260390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.260407] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.260792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.261014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.261028] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.261272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.261577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.261591] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 
00:30:10.109 [2024-02-14 20:30:47.261999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.262287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.262301] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.262645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.263232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.263246] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.263544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.263890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.263905] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.264269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.264644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.264669] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.264894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.265297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.265312] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.265645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.265957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.265971] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.266357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.266734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.266748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 
00:30:10.109 [2024-02-14 20:30:47.267123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.267415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.267432] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.267604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.268021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.268036] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.268339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.268686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.268701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.269054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.269345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.269359] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.269792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.270086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.270100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.270471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.270898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.270931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.109 qpair failed and we were unable to recover it. 00:30:10.109 [2024-02-14 20:30:47.271306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.109 [2024-02-14 20:30:47.271474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.271503] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.110 qpair failed and we were unable to recover it. 
00:30:10.110 [2024-02-14 20:30:47.271967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.272347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.272375] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.110 qpair failed and we were unable to recover it. 00:30:10.110 [2024-02-14 20:30:47.272703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.273022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.273051] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.110 qpair failed and we were unable to recover it. 00:30:10.110 [2024-02-14 20:30:47.273483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.273839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.273854] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.110 qpair failed and we were unable to recover it. 00:30:10.110 [2024-02-14 20:30:47.274079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.274453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.274467] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.110 qpair failed and we were unable to recover it. 00:30:10.110 [2024-02-14 20:30:47.274907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.275291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.275320] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.110 qpair failed and we were unable to recover it. 00:30:10.110 [2024-02-14 20:30:47.275708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.276164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.276192] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.110 qpair failed and we were unable to recover it. 00:30:10.110 [2024-02-14 20:30:47.276607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.276959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.110 [2024-02-14 20:30:47.276989] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.110 qpair failed and we were unable to recover it. 
00:30:10.110 [2024-02-14 20:30:47.277363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.110 [2024-02-14 20:30:47.277791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.110 [2024-02-14 20:30:47.277821] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.110 qpair failed and we were unable to recover it.
[... the same four-line failure cycle repeats for every subsequent reconnect attempt from 20:30:47.277 to 20:30:47.404 (console time 00:30:10.110-00:30:10.115); only the timestamps change: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock connection error for tqpair=0x2181510 against 10.0.0.2:4420, then "qpair failed and we were unable to recover it." ...]
00:30:10.115 [2024-02-14 20:30:47.404197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.115 [2024-02-14 20:30:47.404665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.115 [2024-02-14 20:30:47.404695] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.115 qpair failed and we were unable to recover it.
00:30:10.115 [2024-02-14 20:30:47.405156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.115 [2024-02-14 20:30:47.405537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.115 [2024-02-14 20:30:47.405566] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.115 qpair failed and we were unable to recover it. 00:30:10.115 [2024-02-14 20:30:47.406020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.115 [2024-02-14 20:30:47.406331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.115 [2024-02-14 20:30:47.406361] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.115 qpair failed and we were unable to recover it. 00:30:10.115 [2024-02-14 20:30:47.406813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.115 [2024-02-14 20:30:47.407193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.115 [2024-02-14 20:30:47.407222] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.115 qpair failed and we were unable to recover it. 00:30:10.115 [2024-02-14 20:30:47.407597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.115 [2024-02-14 20:30:47.408056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.408086] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.408402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.408857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.408888] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.409278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.409606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.409636] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.410025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.410403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.410432] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 
00:30:10.116 [2024-02-14 20:30:47.410819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.411192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.411221] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.411668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.411889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.411918] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.412245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.412626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.412663] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.413055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.413442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.413470] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.413938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.414369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.414398] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.414628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.415017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.415046] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.415428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.415822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.415837] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 
00:30:10.116 [2024-02-14 20:30:47.416141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.416308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.416336] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.416727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.417057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.417086] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.417519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.417714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.417729] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.418097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.418479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.418507] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.418936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.419341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.419369] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.419816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.420218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.420232] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.420678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.421132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.421161] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 
00:30:10.116 [2024-02-14 20:30:47.421541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.421974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.422003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.422398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.422824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.422854] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.423289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.423714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.423745] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.424218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.424594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.424622] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.424976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.425409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.425437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.425823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.426319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.426348] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.426756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.427210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.427238] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 
00:30:10.116 [2024-02-14 20:30:47.427644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.427971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.428000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.428338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.428734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.428764] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.429172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.429634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.429670] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.116 [2024-02-14 20:30:47.430076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.430504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.116 [2024-02-14 20:30:47.430532] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.116 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.430975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.431354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.431382] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.431764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.432126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.432154] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.432467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.432670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.432684] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 
00:30:10.117 [2024-02-14 20:30:47.433067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.433495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.433523] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.434014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.434400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.434428] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.434812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.435260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.435289] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.435745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.436124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.436152] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.436610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.436997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.437027] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.437463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.437886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.437901] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.438251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.438548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.438586] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 
00:30:10.117 [2024-02-14 20:30:47.438993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.439377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.439406] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.439801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.440253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.440281] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.440691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.441119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.441147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.441526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.441984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.442014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.442401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.442848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.442882] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.443270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.443721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.443751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.444065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.444518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.444547] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 
00:30:10.117 [2024-02-14 20:30:47.444878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.445270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.445299] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.445703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.446084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.446112] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.446568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.446968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.447014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.447389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.447826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.447855] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.448240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.448677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.448707] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.449092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.449468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.449496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 00:30:10.117 [2024-02-14 20:30:47.449878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.450329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.117 [2024-02-14 20:30:47.450358] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.117 qpair failed and we were unable to recover it. 
00:30:10.117 [2024-02-14 20:30:47.450802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.451179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.451208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.451666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.452127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.452156] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.452618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.453011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.453041] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.453499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.453866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.453896] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.454336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.454779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.454809] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.455189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.455356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.455384] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.455708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.456090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.456119] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 
00:30:10.118 [2024-02-14 20:30:47.456555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.456933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.456963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.457351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.457734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.457765] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.458222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.458679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.458710] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.459146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.459695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.459725] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.460123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.460504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.460532] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.460915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.462104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.462135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.462626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.464422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.464449] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 
00:30:10.118 [2024-02-14 20:30:47.464869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.465294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.465324] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.465711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.466103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.466133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.466753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.467077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.467108] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.467500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.467957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.467988] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.468409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.469009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.469039] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.469510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.469944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.469974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.470380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.470789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.470803] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 
00:30:10.118 [2024-02-14 20:30:47.471106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.471506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.471521] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.471886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.472168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.472183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.472350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.472726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.472756] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.473084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.473490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.473518] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.473969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.474336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.474350] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.474759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.475111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.475125] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.118 [2024-02-14 20:30:47.475659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.476058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.476073] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 
00:30:10.118 [2024-02-14 20:30:47.476221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.476521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.118 [2024-02-14 20:30:47.476536] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.118 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.476939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.477310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.477325] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.477677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.478024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.478038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.478378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.478585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.478605] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.479024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.479429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.479444] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.479846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.480256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.480284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.480668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.480947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.480962] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 
00:30:10.119 [2024-02-14 20:30:47.481318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.481454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.481468] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.481824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.482209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.482239] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.482699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.483129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.483143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.483495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.483882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.483898] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.484198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.484555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.484584] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.484954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.485390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.485422] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.485862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.486196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.486213] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 
00:30:10.119 [2024-02-14 20:30:47.486630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.486987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.487002] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.487341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.487757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.487772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.488124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.488380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.488394] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.488632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.489074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.489088] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.489371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.489662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.489678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.490074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.490423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.490437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.490814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.491187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.491216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 
00:30:10.119 [2024-02-14 20:30:47.491643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.491984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.491999] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.492354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.492903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.492918] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.493263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.493639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.493681] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.494150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.494458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.494487] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.494889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.495230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.495244] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.495661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.496552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.496579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 00:30:10.119 [2024-02-14 20:30:47.496947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.497244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.119 [2024-02-14 20:30:47.497259] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.119 qpair failed and we were unable to recover it. 
00:30:10.119 [2024-02-14 20:30:47.497542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.119 [2024-02-14 20:30:47.497932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.119 [2024-02-14 20:30:47.497947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.119 qpair failed and we were unable to recover it.
00:30:10.119 [2024-02-14 20:30:47.498250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.498676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.498692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.499044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.499464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.499479] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.499840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.500196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.500225] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.500786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.501184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.501213] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.501545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.501892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.501907] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.502263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.502566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.502580] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.502962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.503310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.503325] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.503902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.504186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.504202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.504587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.505043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.505057] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.505428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.505769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.505784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.506071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.506294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.506308] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.507091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.507413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.507445] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.507773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.508090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.508105] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.508449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.508886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.508916] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.509295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.509693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.509724] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.510059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.510328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.510342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.510930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.511217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.511246] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.511477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.511843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.511860] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.512204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.512547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.512562] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.512944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.513325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.513354] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.513700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.514017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.514046] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.514430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.514793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.514807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.515321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.515662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.515678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.515886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.516230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.516245] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.516596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.516952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.516982] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.120 [2024-02-14 20:30:47.517383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.517767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.120 [2024-02-14 20:30:47.517782] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.120 qpair failed and we were unable to recover it.
00:30:10.383 [2024-02-14 20:30:47.518232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.383 [2024-02-14 20:30:47.518573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.383 [2024-02-14 20:30:47.518588] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.383 qpair failed and we were unable to recover it.
00:30:10.383 [2024-02-14 20:30:47.518877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.383 [2024-02-14 20:30:47.519235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.519250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.519621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.519926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.519941] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.520289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.520624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.520638] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.520952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.521231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.521259] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.521588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.521894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.521909] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.522247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.522663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.522693] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.524076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.524467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.524506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.524827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.525150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.525181] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.525565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.525942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.525979] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.526369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.526745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.526775] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.527144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.527624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.527673] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.528060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.528370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.528398] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.528733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.529110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.529139] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.529464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.529799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.529829] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.530265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.530542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.530570] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.531011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.531330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.531359] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.531694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.532065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.532093] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.532485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.532872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.532901] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.533294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.533605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.533634] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.533971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.534277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.534306] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.534732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.535064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.535092] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.535495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.535875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.535905] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.536232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.536544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.536573] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.537030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.537343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.537371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.537759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.538085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.538114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.538447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.538878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.538909] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.539294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.539722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.539751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.539933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.540336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.540365] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.540700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.541010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.541039] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.541426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.541704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.541733] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.542115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.542502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.542531] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.542848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.543170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.543185] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.543603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.544042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.384 [2024-02-14 20:30:47.544058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.384 qpair failed and we were unable to recover it.
00:30:10.384 [2024-02-14 20:30:47.544359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.544681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.544712] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.545030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.545357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.545385] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.545774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.546157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.546187] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.546502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.546821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.546851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.547180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.547610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.547640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.547965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.548254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.548269] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.548619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.548954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.548984] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.549300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.549689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.549719] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.550060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.550366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.550395] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.550726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.551129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.551159] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.551533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.551903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.551940] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.552304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.552672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.552703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.553007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.553460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.553489] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.553817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.554126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.554155] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.554590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.554912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.554942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.555268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.555590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.555619] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.556169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.556494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.556524] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.556931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.557250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.557279] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.557595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.558053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.558083] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.558473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.558841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.558871] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.559424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.559799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.559830] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.560129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.560550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.560565] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.560939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.561342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.561371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.561827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.562193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.562208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.562484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.562897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.562928] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.563318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.563491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.563520] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.563982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.564306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.564345] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.564730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.565096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.385 [2024-02-14 20:30:47.565126] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.385 qpair failed and we were unable to recover it.
00:30:10.385 [2024-02-14 20:30:47.565304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.565823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.565853] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.566233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.566610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.566639] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.566961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.567227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.567242] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.567535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.567684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.567699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.568000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.568372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.568401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.568797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.569172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.569201] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.569594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.569911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.569926] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.570325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.570623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.570668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.571047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.571376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.571410] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.571729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.572049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.572078] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.572396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.572712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.572741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.573270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.573580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.573608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.573956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.574266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.574295] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.574685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.575125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.575154] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.575491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.575790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.575820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.576128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.576573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.576602] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.576926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.577304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.577318] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.577689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.578010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.578039] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.578269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.578588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.578617] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.579089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.579403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.579431] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.579869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.580159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.580174] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.580479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.580815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.580845] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.581240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.581622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.581671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.581990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.582407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.582436] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.582879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.583284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.583313] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.583746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.584109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.584124] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.584491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.584862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.584892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.585286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.585665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.585696] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.586096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.586464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.386 [2024-02-14 20:30:47.586494] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.386 qpair failed and we were unable to recover it.
00:30:10.386 [2024-02-14 20:30:47.586960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.587273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.587303] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.387 qpair failed and we were unable to recover it.
00:30:10.387 [2024-02-14 20:30:47.587699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.588036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.588065] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.387 qpair failed and we were unable to recover it.
00:30:10.387 [2024-02-14 20:30:47.588391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.588780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.588810] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.387 qpair failed and we were unable to recover it.
00:30:10.387 [2024-02-14 20:30:47.589119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.589547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.589576] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.387 qpair failed and we were unable to recover it.
00:30:10.387 [2024-02-14 20:30:47.589949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.590380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.590409] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.387 qpair failed and we were unable to recover it.
00:30:10.387 [2024-02-14 20:30:47.590795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.591179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.591208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.387 qpair failed and we were unable to recover it.
00:30:10.387 [2024-02-14 20:30:47.591585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.591895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.591924] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.387 qpair failed and we were unable to recover it.
00:30:10.387 [2024-02-14 20:30:47.592312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.592639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.387 [2024-02-14 20:30:47.592689] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.387 qpair failed and we were unable to recover it.
00:30:10.387 [2024-02-14 20:30:47.592999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.593172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.593186] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.593680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.593902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.593942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.594289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.594743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.594757] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.595123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.595431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.595460] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.595851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.596170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.596199] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.596598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.596937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.596967] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.597360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.597763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.597792] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 
00:30:10.387 [2024-02-14 20:30:47.598177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.598557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.598587] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.598968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.599291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.599306] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.599660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.599913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.599942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.600324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.600711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.600741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.601206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.601570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.601598] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.601960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.602285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.602314] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.602693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.603072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.603101] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 
00:30:10.387 [2024-02-14 20:30:47.603344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.603670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.603699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.604043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.604410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.604439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.604830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.605167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.605197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.605633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.605956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.605971] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.606468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.606780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.606810] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.607197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.607553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.607583] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 00:30:10.387 [2024-02-14 20:30:47.607969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.608346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.387 [2024-02-14 20:30:47.608376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.387 qpair failed and we were unable to recover it. 
00:30:10.388 [2024-02-14 20:30:47.608689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.609122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.609151] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.609700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.610081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.610115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.610568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.611029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.611058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.611369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.611700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.611730] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.612052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.612364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.612393] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.612773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.613114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.613143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.613439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.613818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.613848] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 
00:30:10.388 [2024-02-14 20:30:47.614180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.614563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.614591] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.614912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.615049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.615063] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.615285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.615581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.615609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.616055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.616352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.616366] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.616772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.617050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.617079] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.617530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.617912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.617942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.618376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.618808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.618838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 
00:30:10.388 [2024-02-14 20:30:47.619153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.619426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.619441] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.619736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.620087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.620117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.620501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.620981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.621011] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.621326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.621640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.621679] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.622003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.622383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.622412] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.622740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.623135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.623164] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.623543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.623847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.623876] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 
00:30:10.388 [2024-02-14 20:30:47.624334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.624727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.624756] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.625151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.625475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.388 [2024-02-14 20:30:47.625504] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.388 qpair failed and we were unable to recover it. 00:30:10.388 [2024-02-14 20:30:47.625814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.626200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.626229] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.626614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.627007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.627037] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.627351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.627638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.627659] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.628005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.628395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.628424] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.628890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.629209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.629239] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 
00:30:10.389 [2024-02-14 20:30:47.629616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.629947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.629977] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.630302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.630731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.630761] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.631076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.631375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.631390] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.631818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.632160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.632190] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.632590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.632976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.633007] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.633400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.633764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.633794] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.634114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.634432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.634461] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 
00:30:10.389 [2024-02-14 20:30:47.634847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.635158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.635188] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.635523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.635833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.635863] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.636253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.636686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.636716] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.636947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.637238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.637267] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.637591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.637965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.637994] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.638385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.638851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.638881] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.639218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.639581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.639610] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 
00:30:10.389 [2024-02-14 20:30:47.640068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.640500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.640515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.640810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.641125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.641154] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.641527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.641910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.641941] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.642373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.642682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.642714] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.642887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.643218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.643247] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.643562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.643886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.643917] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.644253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.644756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.644787] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 
00:30:10.389 [2024-02-14 20:30:47.645172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.645476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.645506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.645875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.646248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.646276] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.389 [2024-02-14 20:30:47.646659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.646974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.389 [2024-02-14 20:30:47.647003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.389 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.647314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.647601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.647619] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.647905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.648277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.648306] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.648684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.649074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.649103] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.649414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.649773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.649803] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 
00:30:10.390 [2024-02-14 20:30:47.650111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.650483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.650512] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.650823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.651150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.651165] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.651581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.651975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.652004] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.652405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.652798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.652829] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.653231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.653599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.653628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.653964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.654347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.654376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.654690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.655016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.655045] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 
00:30:10.390 [2024-02-14 20:30:47.655495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.655866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.655896] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.656198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.656604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.656633] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.657029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.657407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.657422] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.657769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.658118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.658148] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.658455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.658820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.658851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.659235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.659578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.659607] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.660000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.660317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.660346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 
00:30:10.390 [2024-02-14 20:30:47.660665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.660983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.661012] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.661380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.661747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.661777] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.662032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.662393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.662408] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.662765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.663166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.663195] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.663566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.663980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.664010] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.664408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.664729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.664759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.665162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.665467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.665496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 
00:30:10.390 [2024-02-14 20:30:47.665811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.666185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.666213] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.666616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.666999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.667029] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.667340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.667630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.667645] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.390 [2024-02-14 20:30:47.668198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.668418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.390 [2024-02-14 20:30:47.668448] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.390 qpair failed and we were unable to recover it. 00:30:10.391 [2024-02-14 20:30:47.668854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.669338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.669352] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 00:30:10.391 [2024-02-14 20:30:47.669781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.670140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.670169] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 00:30:10.391 [2024-02-14 20:30:47.670496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.670879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.670909] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 
00:30:10.391 [2024-02-14 20:30:47.671293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.671621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.671668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 00:30:10.391 [2024-02-14 20:30:47.672056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.672312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.672341] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 00:30:10.391 [2024-02-14 20:30:47.672662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.673070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.673099] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 00:30:10.391 [2024-02-14 20:30:47.673415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.673796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.673837] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 00:30:10.391 [2024-02-14 20:30:47.674129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.674523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.674538] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 00:30:10.391 [2024-02-14 20:30:47.674888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.675141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.675170] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 00:30:10.391 [2024-02-14 20:30:47.675492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.675833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.391 [2024-02-14 20:30:47.675863] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.391 qpair failed and we were unable to recover it. 
00:30:10.391 [2024-02-14 20:30:47.676199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.391 [2024-02-14 20:30:47.676566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.391 [2024-02-14 20:30:47.676595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.391 qpair failed and we were unable to recover it.
00:30:10.391 [... the identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt from 20:30:47.676963 through 20:30:47.713464; duplicates elided ...]
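For context on the flood above: errno 111 on Linux is ECONNREFUSED, i.e. the initiator's connect() to 10.0.0.2:4420 is being refused because nothing is listening on that port while the target is down. A minimal sketch (not part of the test run) that reproduces the same failure mode with bash's /dev/tcp redirection:

    $ exec 3<>/dev/tcp/10.0.0.2/4420    # open a TCP socket on fd 3; no listener on port 4420
    bash: connect: Connection refused
    bash: /dev/tcp/10.0.0.2/4420: Connection refused

The host-side NVMe/TCP driver keeps retrying the qpair, which is why the posix_sock_create / nvme_tcp_qpair_connect_sock pair repeats until the target comes back.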
00:30:10.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1966193 Killed "${NVMF_APP[@]}" "$@"
00:30:10.393 [... connect() failed (errno = 111) / qpair failed sequence from 20:30:47.713900 through 20:30:47.714165; duplicates elided ...]
00:30:10.393 20:30:47 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:30:10.393 [2024-02-14 20:30:47.714540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.393 20:30:47 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:10.393 [... connect() failed (errno = 111) / qpair failed sequence at 20:30:47.714908-714924; duplicates elided ...]
00:30:10.393 20:30:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:30:10.393 [2024-02-14 20:30:47.715277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.393 20:30:47 -- common/autotest_common.sh@710 -- # xtrace_disable
00:30:10.393 [... connect() failed (errno = 111) / qpair failed sequence at 20:30:47.715636-715657; duplicates elided ...]
00:30:10.393 20:30:47 -- common/autotest_common.sh@10 -- # set +x
00:30:10.393 [... connect() failed (errno = 111) / qpair failed retries continue from 20:30:47.715955 through 20:30:47.717994; duplicates elided ...]
00:30:10.393 [... connect() failed (errno = 111) / qpair failed retries continue from 20:30:47.718349 through 20:30:47.721327; duplicates elided ...]
00:30:10.393 20:30:47 -- nvmf/common.sh@469 -- # nvmfpid=1967048
00:30:10.393 [2024-02-14 20:30:47.721748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.393 20:30:47 -- nvmf/common.sh@470 -- # waitforlisten 1967048
00:30:10.393 [2024-02-14 20:30:47.722044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.393 20:30:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:10.393 [2024-02-14 20:30:47.722060] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.393 qpair failed and we were unable to recover it.
00:30:10.393 20:30:47 -- common/autotest_common.sh@817 -- # '[' -z 1967048 ']'
00:30:10.393 [2024-02-14 20:30:47.722402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.393 20:30:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:10.393 [... connect() failed (errno = 111) / qpair failed sequence at 20:30:47.722701-722717; duplicates elided ...]
00:30:10.393 20:30:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:30:10.393 [2024-02-14 20:30:47.723122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.393 20:30:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:10.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:10.393 20:30:47 -- common/autotest_common.sh@826 -- # xtrace_disable
00:30:10.393 [... connect() failed (errno = 111) / qpair failed sequence at 20:30:47.723527-723542; duplicates elided ...]
00:30:10.393 20:30:47 -- common/autotest_common.sh@10 -- # set +x
00:30:10.393 [... connect() failed (errno = 111) / qpair failed retries continue from 20:30:47.723908 through 20:30:47.726174; duplicates elided ...]
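The trace above is target_disconnect.sh restarting the target: the old nvmf_tgt (pid 1966193) was killed, nvmfappstart relaunches it inside the cvl_0_0_ns_spdk network namespace as pid 1967048, and waitforlisten polls the RPC socket (rpc_addr=/var/tmp/spdk.sock, up to max_retries=100 attempts). A condensed, illustrative sketch of that pattern (simplified from what SPDK's nvmf/common.sh and autotest_common.sh trace here; the retry policy and relative paths are approximations, not the exact helpers):

    # relaunch the target in the test netns, same flags as the trace above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # waitforlisten: block until the app answers on its RPC UNIX socket
    until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; break; }
        sleep 0.5
    done

Until the new process binds 10.0.0.2:4420 again, every initiator reconnect attempt keeps failing with errno 111, which is why the error stream continues uninterrupted through the restart.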
00:30:10.393 [... connect() failed (errno = 111) / qpair failed retries continue from 20:30:47.726461 through 20:30:47.761073; duplicates elided ...]
00:30:10.395 [... connect() failed (errno = 111) / qpair failed retries continue from 20:30:47.761484 through 20:30:47.764688; duplicates elided ...]
00:30:10.395 [2024-02-14 20:30:47.764813] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:30:10.395 [2024-02-14 20:30:47.764853] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:10.395 [... connect() failed (errno = 111) / qpair failed retries continue from 20:30:47.765032 through 20:30:47.766633; duplicates elided ...]
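The EAL parameter line above records how the relaunched target initializes DPDK. A rough annotation, based on standard DPDK EAL option semantics rather than anything stated in the log itself (numeric log levels follow rte_log conventions):

    # -c 0xF0                          hex core mask: run on cores 4-7 (matches nvmf_tgt -m 0xF0)
    # --no-telemetry                   skip the DPDK telemetry socket
    # --log-level=lib.eal:6            per-component verbosity (6 = notice, 5 = warning)
    # --base-virtaddr=0x200000000000   map hugepage memory at a fixed virtual base
    # --match-allocations              free hugepages back exactly as they were allocated
    # --file-prefix=spdk0              namespace hugepage files so app instances don't collide
    # --proc-type=auto                 detect primary vs. secondary process role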
00:30:10.395 [... connect() failed (errno = 111) / qpair failed retries continue from 20:30:47.767045 through 20:30:47.786993; duplicates elided ...]
00:30:10.396 [2024-02-14 20:30:47.787367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.396 [2024-02-14 20:30:47.787733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.396 [2024-02-14 20:30:47.787748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.396 qpair failed and we were unable to recover it. 00:30:10.396 [2024-02-14 20:30:47.788090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.396 [2024-02-14 20:30:47.788529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.396 [2024-02-14 20:30:47.788544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.396 qpair failed and we were unable to recover it. 00:30:10.396 [2024-02-14 20:30:47.788921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.396 [2024-02-14 20:30:47.789323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.396 [2024-02-14 20:30:47.789337] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.396 qpair failed and we were unable to recover it. 00:30:10.396 [2024-02-14 20:30:47.789645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.790058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.790073] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.397 qpair failed and we were unable to recover it. 00:30:10.397 [2024-02-14 20:30:47.790412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.790814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.790829] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.397 qpair failed and we were unable to recover it. 00:30:10.397 [2024-02-14 20:30:47.791256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.791561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.791575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.397 qpair failed and we were unable to recover it. 00:30:10.397 [2024-02-14 20:30:47.791725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.792155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.792169] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.397 qpair failed and we were unable to recover it. 
00:30:10.397 [2024-02-14 20:30:47.792538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.792896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.792911] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.397 qpair failed and we were unable to recover it. 00:30:10.397 [2024-02-14 20:30:47.793207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.793491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.793505] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.397 qpair failed and we were unable to recover it. 00:30:10.397 [2024-02-14 20:30:47.793939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.794283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.397 [2024-02-14 20:30:47.794298] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.397 qpair failed and we were unable to recover it. 00:30:10.397 [2024-02-14 20:30:47.794699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.663 [2024-02-14 20:30:47.795119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.663 [2024-02-14 20:30:47.795134] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.663 qpair failed and we were unable to recover it. 00:30:10.663 [2024-02-14 20:30:47.795467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.663 [2024-02-14 20:30:47.795852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.663 [2024-02-14 20:30:47.795866] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.663 qpair failed and we were unable to recover it. 00:30:10.663 [2024-02-14 20:30:47.796231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.663 [2024-02-14 20:30:47.796672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.663 [2024-02-14 20:30:47.796687] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.663 qpair failed and we were unable to recover it. 00:30:10.663 [2024-02-14 20:30:47.797118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.663 [2024-02-14 20:30:47.797421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.663 [2024-02-14 20:30:47.797435] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.663 qpair failed and we were unable to recover it. 
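Note: errno = 111 on Linux is ECONNREFUSED — the target host answers, but nothing is listening on 10.0.0.2:4420 (the NVMe/TCP default port) yet, so every posix_sock_create() attempt above is refused. A minimal standalone sketch, not SPDK code, that reproduces the same errno against a reachable host with no listener (the address and port are simply copied from the log):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),   /* NVMe/TCP default port, as in the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            /* Prints "connect() failed, errno = 111 (Connection refused)"
             * when the host is up but no listener owns the port. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }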
00:30:10.663 EAL: No free 2048 kB hugepages reported on node 1
[... 7 identical retry cycles omitted (2024-02-14 20:30:47.797857 through 20:30:47.802838), same errno = 111 / tqpair=0x2181510 pattern ...]
[... 56 identical retry cycles omitted (2024-02-14 20:30:47.803111 through 20:30:47.841434), same errno = 111 / tqpair=0x2181510 pattern ...]
[... 5 identical retry cycles omitted (2024-02-14 20:30:47.841862 through 20:30:47.844535) ...]
00:30:10.666 [2024-02-14 20:30:47.844721] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
[... 2 identical retry cycles omitted (2024-02-14 20:30:47.844882 through 20:30:47.845900) ...]
[... 42 identical retry cycles omitted (2024-02-14 20:30:47.846255 through 20:30:47.873605); the host is still unable to establish the NVMe/TCP qpair to 10.0.0.2:4420 ...]
00:30:10.667 [2024-02-14 20:30:47.873972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.874252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.874267] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.667 qpair failed and we were unable to recover it. 00:30:10.667 [2024-02-14 20:30:47.874557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.874853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.874869] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.667 qpair failed and we were unable to recover it. 00:30:10.667 [2024-02-14 20:30:47.875151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.875488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.875503] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.667 qpair failed and we were unable to recover it. 00:30:10.667 [2024-02-14 20:30:47.875807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.876154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.876170] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.667 qpair failed and we were unable to recover it. 00:30:10.667 [2024-02-14 20:30:47.876574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.876732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.876747] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.667 qpair failed and we were unable to recover it. 00:30:10.667 [2024-02-14 20:30:47.877092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.877428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.877443] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.667 qpair failed and we were unable to recover it. 00:30:10.667 [2024-02-14 20:30:47.877876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.878156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.878171] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.667 qpair failed and we were unable to recover it. 
00:30:10.667 [2024-02-14 20:30:47.878514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.878861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.667 [2024-02-14 20:30:47.878876] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.667 qpair failed and we were unable to recover it. 00:30:10.667 [2024-02-14 20:30:47.879283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.879575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.879595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.879952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.880234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.880252] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.880542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.880846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.880866] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.881190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.881497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.881514] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.881807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.882108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.882126] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.882426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.882739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.882756] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 
00:30:10.668 [2024-02-14 20:30:47.883049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.883406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.883423] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.883727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.884074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.884091] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.884409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.884767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.884784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.885063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.885411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.885426] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.885622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.886057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.886075] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.886428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.886779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.886797] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.887139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.887436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.887453] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 
00:30:10.668 [2024-02-14 20:30:47.887745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.888036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.888052] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.888349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.888644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.888667] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.889022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.889362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.889377] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.889583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.889927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.889942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.890248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.890537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.890552] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.890899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.891263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.891281] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.891633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.891779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.891794] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 
00:30:10.668 [2024-02-14 20:30:47.892088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.892482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.892496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.892781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.893123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.893138] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.893542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.893826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.893841] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.668 qpair failed and we were unable to recover it. 00:30:10.668 [2024-02-14 20:30:47.894193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.668 [2024-02-14 20:30:47.894530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.894545] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.894846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.895104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.895118] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.895543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.895909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.895924] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.896349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.896638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.896660] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 
00:30:10.669 [2024-02-14 20:30:47.896932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.897352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.897366] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.897655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.898078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.898095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.898499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.898843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.898858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.899142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.899548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.899563] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.899969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.900238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.900252] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.900602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.900879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.900894] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.901238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.901428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.901442] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 
00:30:10.669 [2024-02-14 20:30:47.901851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.902186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.902201] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.902575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.902733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.902748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.903103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.903507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.903522] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.903948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.904371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.904387] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.904754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.905141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.905155] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.905587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.906002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.906017] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.906445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.906815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.906830] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 
00:30:10.669 [2024-02-14 20:30:47.907119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.907404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.907418] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.907849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.908145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.908159] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.908591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.908942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.908957] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.909311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.909604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.909618] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.909976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.910328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.910342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.910691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.911136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.911150] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 00:30:10.669 [2024-02-14 20:30:47.911440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.911776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.669 [2024-02-14 20:30:47.911791] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.669 qpair failed and we were unable to recover it. 
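(Triage note: errno = 111 is ECONNREFUSED. The initiator's connect() to 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) reaches the host, but nothing is accepting on that port yet, so the kernel refuses each attempt and nvme_tcp_qpair_connect_sock gives up on the qpair. A quick way to confirm this by hand from a shell; the address and port come from the log above, while the commands themselves are a generic sketch and not part of the test scripts:)

+ bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'  # bash's built-in TCP redirection; with no
                                            # listener on 4420 it reports "Connection refused"
+ ss -ltn | grep 4420                       # run on the target host; no output means no
                                            # NVMe/TCP listener is bound on 4420 yet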
00:30:10.669 [2024-02-14 20:30:47.912216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[... further errno = 111 retry sequences of the same form continue through 20:30:47.914 ...]
00:30:10.669 [2024-02-14 20:30:47.914793] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:10.670 [2024-02-14 20:30:47.914893] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:10.670 [2024-02-14 20:30:47.914902] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:10.670 [2024-02-14 20:30:47.914907] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:10.670 [2024-02-14 20:30:47.915020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:30:10.670 [2024-02-14 20:30:47.915128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:30:10.670 [2024-02-14 20:30:47.915218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.670 [2024-02-14 20:30:47.915232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:30:10.670 [2024-02-14 20:30:47.915233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:30:10.670 [2024-02-14 20:30:47.915563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.670 [2024-02-14 20:30:47.915578] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.670 qpair failed and we were unable to recover it.
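(The NOTICE lines above are the nvmf target application completing startup: trace setup, then reactor threads starting on cores 4-7. The trace_flags "name too long" error only means one tracepoint description exceeded the registration length limit and is independent of the socket failures. Per the log's own hint, the trace data can be captured for debugging; the two commands are exactly what the NOTICEs print, and the /tmp destination is just an illustrative choice:)

+ spdk_trace -s nvmf -i 0                     # snapshot tracepoint events from the running nvmf app
+ cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0  # or keep the shared-memory trace file for offline analysis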
[... the connect() failed (errno = 111) / sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence keeps repeating from 20:30:47.915934 through 20:30:47.961702 ...]
00:30:10.672 [2024-02-14 20:30:47.962077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.962478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.962494] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.962852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.963274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.963289] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.963678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.964092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.964107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.964462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.964833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.964849] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.965204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.965618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.965633] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.965989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.966325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.966340] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.966792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.967177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.967193] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 
00:30:10.672 [2024-02-14 20:30:47.967604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.967952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.967967] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.968371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.968724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.968740] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.969155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.969526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.969540] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.969968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.970251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.970265] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.970563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.970984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.970998] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.971344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.971772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.971786] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.972214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.972585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.972600] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 
00:30:10.672 [2024-02-14 20:30:47.972878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.973263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.973278] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.672 [2024-02-14 20:30:47.973644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.973981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.672 [2024-02-14 20:30:47.973996] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.672 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.974330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.974729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.974744] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.975171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.975527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.975541] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.975968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.976355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.976369] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.976752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.977103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.977118] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.977481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.977834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.977849] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 
00:30:10.673 [2024-02-14 20:30:47.978247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.978620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.978634] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.978940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.979298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.979315] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.979737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.980032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.980046] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.980510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.980862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.980877] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.981290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.981656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.981672] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.981960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.982361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.982376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.982666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.983090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.983105] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 
00:30:10.673 [2024-02-14 20:30:47.983386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.983811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.983825] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.984271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.984684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.984699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.985105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.985541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.985555] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.986007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.986413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.986427] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.986720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.987144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.987158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.987560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.987905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.987920] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.988283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.988636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.988657] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 
00:30:10.673 [2024-02-14 20:30:47.988943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.989236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.989250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.989657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.990021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.990036] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.990335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.990809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.990824] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.991104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.991499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.991514] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.991890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.992247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.992262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.992540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.992826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.992841] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.993204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.993630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.993644] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 
00:30:10.673 [2024-02-14 20:30:47.994005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.994339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.994353] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.994488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.994921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.994936] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.673 qpair failed and we were unable to recover it. 00:30:10.673 [2024-02-14 20:30:47.995362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.673 [2024-02-14 20:30:47.995781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.995796] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:47.996222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.996664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.996678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:47.997030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.997408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.997423] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:47.997845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.998199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.998213] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:47.998590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.998998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.999013] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 
00:30:10.674 [2024-02-14 20:30:47.999388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.999832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:47.999846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.000208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.000496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.000511] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.000942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.001377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.001392] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.001677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.001971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.001985] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.002413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.002761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.002776] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.003231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.003638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.003659] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.004024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.004371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.004386] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 
00:30:10.674 [2024-02-14 20:30:48.004734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.005264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.005279] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.005704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.006124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.006139] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.006544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.006991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.007006] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.007436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.007798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.007813] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.008259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.008675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.008690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.009029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.009386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.009401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.009702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.010055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.010069] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 
00:30:10.674 [2024-02-14 20:30:48.010470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.010841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.010856] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.011168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.011516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.011530] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.011676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.012038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.012052] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.012420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.012816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.012831] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.013223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.013576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.013590] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.013868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.014250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.014264] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.014667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.015121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.015135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 
00:30:10.674 [2024-02-14 20:30:48.015539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.015886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.015901] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.016330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.016581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.016595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.016972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.017325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.674 [2024-02-14 20:30:48.017340] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.674 qpair failed and we were unable to recover it. 00:30:10.674 [2024-02-14 20:30:48.017677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.017819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.017836] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.018127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.018409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.018424] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.018837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.019213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.019228] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.019659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.020089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.020104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 
00:30:10.675 [2024-02-14 20:30:48.020403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.020882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.020897] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.021338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.021639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.021670] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.022094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.022440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.022454] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.022742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.023165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.023179] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.023523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.023877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.023892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.024232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.024575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.024589] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.024945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.025213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.025228] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 
00:30:10.675 [2024-02-14 20:30:48.025568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.025919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.025933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.026290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.026695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.026710] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.027079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.027427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.027441] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.027851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.028289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.028302] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.028597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.028949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.028964] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.029314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.029739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.029755] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.030125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.030530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.030545] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 
00:30:10.675 [2024-02-14 20:30:48.030980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.031315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.031330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.031756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.032161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.032176] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.032370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.032721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.032735] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.033083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.033413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.033427] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.033830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.034127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.034142] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.034444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.034859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.034874] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 00:30:10.675 [2024-02-14 20:30:48.035215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.035500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.675 [2024-02-14 20:30:48.035515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.675 qpair failed and we were unable to recover it. 
00:30:10.675 [2024-02-14 20:30:48.035866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.675 [2024-02-14 20:30:48.036151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.675 [2024-02-14 20:30:48.036165] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:10.675 qpair failed and we were unable to recover it.
00:30:10.675 -- 00:30:10.946 [2024-02-14 20:30:48.036565 -- 20:30:48.148461] last message sequence repeated for every subsequent connect attempt in this interval: each attempt logged two "connect() failed, errno = 111" errors from posix.c:1037:posix_sock_create, then a "sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock, and ended with "qpair failed and we were unable to recover it."
00:30:10.946 [2024-02-14 20:30:48.148749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.149176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.149191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.946 qpair failed and we were unable to recover it. 00:30:10.946 [2024-02-14 20:30:48.149596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.149933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.149948] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.946 qpair failed and we were unable to recover it. 00:30:10.946 [2024-02-14 20:30:48.150304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.150641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.150661] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.946 qpair failed and we were unable to recover it. 00:30:10.946 [2024-02-14 20:30:48.151026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.151379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.151394] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.946 qpair failed and we were unable to recover it. 00:30:10.946 [2024-02-14 20:30:48.151819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.152094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.152109] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.946 qpair failed and we were unable to recover it. 00:30:10.946 [2024-02-14 20:30:48.152452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.152796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.946 [2024-02-14 20:30:48.152811] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.153188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.153557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.153571] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 
00:30:10.947 [2024-02-14 20:30:48.153867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.154289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.154304] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.154658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.154944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.154958] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.155317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.155668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.155683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.155971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.156383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.156397] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.156739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.157110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.157124] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.157468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.157756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.157771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.158051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.158472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.158487] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 
00:30:10.947 [2024-02-14 20:30:48.158866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.159289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.159303] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.159638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.160088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.160106] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.160394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.160749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.160764] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.160985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.161286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.161301] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.161728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.162130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.162145] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.162500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.162920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.162935] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.163236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.163638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.163657] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 
00:30:10.947 [2024-02-14 20:30:48.163939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.164273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.164288] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.164711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.165066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.165081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.165453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.165801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.165816] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.166095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.166288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.166303] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.166777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.167186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.167200] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.167365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.167560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.167575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.947 [2024-02-14 20:30:48.167872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.168281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.168295] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 
00:30:10.947 [2024-02-14 20:30:48.168584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.168946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.947 [2024-02-14 20:30:48.168960] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.947 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.169365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.169771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.169786] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.170199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.170602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.170617] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.170909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.171334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.171348] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.171653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.172103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.172117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.172541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.172939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.172954] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.173356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.173784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.173798] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 
00:30:10.948 [2024-02-14 20:30:48.173991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.174296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.174310] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.174739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.175147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.175162] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.175513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.175888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.175902] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.176252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.176593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.176608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.176986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.177413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.177427] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.177837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.178183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.178197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.178598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.179028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.179043] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 
00:30:10.948 [2024-02-14 20:30:48.179189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.179543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.179557] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.179901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.180257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.180272] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.180674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.181078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.181093] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.181445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.181814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.181829] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.182243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.182656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.182671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.183021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.183384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.183399] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.183751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.184152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.184166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 
00:30:10.948 [2024-02-14 20:30:48.184589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.185008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.185024] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.185379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.185783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.185798] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.186145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.186574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.186589] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.187004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.187365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.187380] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.187752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.188036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.188050] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.188474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.188936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.188951] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.189246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.189660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.189675] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 
00:30:10.948 [2024-02-14 20:30:48.189970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.190391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.190405] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.948 qpair failed and we were unable to recover it. 00:30:10.948 [2024-02-14 20:30:48.190807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.948 [2024-02-14 20:30:48.191105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.191119] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.191478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.191878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.191892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.192249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.192603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.192618] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.192898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.193319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.193333] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.193708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.194079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.194093] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.194446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.194870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.194885] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 
00:30:10.949 [2024-02-14 20:30:48.195184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.195557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.195571] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.195921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.196254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.196269] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.196687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.197045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.197061] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.197432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.197834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.197849] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.198204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.198549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.198563] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.198908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.199188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.199203] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.199605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.200003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.200018] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 
00:30:10.949 [2024-02-14 20:30:48.200371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.200666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.200681] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.200972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.201373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.201387] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.201745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.202177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.202191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.202552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.202982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.202997] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.203147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.203486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.203501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.203861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.204221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.204239] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.204666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.204861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.204875] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 
00:30:10.949 [2024-02-14 20:30:48.205302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.205594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.205608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.206012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.206386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.206400] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.206746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.207116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.207130] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.207418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.207829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.207843] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.208293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.208673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.208687] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.209114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.209471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.209485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.209893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.210240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.210254] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 
00:30:10.949 [2024-02-14 20:30:48.210470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.210897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.210912] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.211265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.211564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.949 [2024-02-14 20:30:48.211581] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.949 qpair failed and we were unable to recover it. 00:30:10.949 [2024-02-14 20:30:48.212008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.212354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.212369] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.212749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.213086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.213100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.213449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.213794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.213808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.214211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.214636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.214654] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.214942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.215319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.215333] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 
00:30:10.950 [2024-02-14 20:30:48.215762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.216057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.216071] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.216338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.216708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.216722] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.217069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.217262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.217276] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.217625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.218038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.218053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.218424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.218783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.218800] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.219095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.219399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.219413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.219841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.220193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.220208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 
00:30:10.950 [2024-02-14 20:30:48.220603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.221005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.221020] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.221384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.221814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.221828] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.222188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.222403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.222417] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.222763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.223115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.223129] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.223536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.223836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.223851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.224206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.224497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.224511] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.224843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.225207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.225221] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 
00:30:10.950 [2024-02-14 20:30:48.225651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.225991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.226005] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.226454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.226811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.226825] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.227259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.227551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.227565] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.227848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.228221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.228235] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.228583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.228939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.228954] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.229314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.229661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.229676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.230043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.230471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.230485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 
00:30:10.950 [2024-02-14 20:30:48.230915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.231282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.231296] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.231723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.232132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.232146] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.950 qpair failed and we were unable to recover it. 00:30:10.950 [2024-02-14 20:30:48.232549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.232857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.950 [2024-02-14 20:30:48.232872] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.233279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.233584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.233598] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.233965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.234410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.234425] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.234789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.235152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.235166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.235519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.235872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.235886] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 
00:30:10.951 [2024-02-14 20:30:48.236298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.236677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.236692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.237051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.237425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.237439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.237872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.238314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.238328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.238667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.238943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.238957] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.239241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.239665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.239679] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.239977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.240324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.240338] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.240685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.241062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.241077] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 
00:30:10.951 [2024-02-14 20:30:48.241436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.241782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.241797] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.242151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.242551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.242565] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.242988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.243336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.243351] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.243776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.244125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.244139] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.244494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.244895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.244910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.245250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.245699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.245714] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.246072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.246407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.246421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 
00:30:10.951 [2024-02-14 20:30:48.246850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.247280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.247295] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.247639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.248039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.248054] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.248410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.248756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.248771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.249150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.249567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.249582] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.249960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.250362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.250376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.250806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.251248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.251262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.251666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.252039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.252053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 
00:30:10.951 [2024-02-14 20:30:48.252471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.252817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.252832] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.253236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.253571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.253586] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.253947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.254375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.951 [2024-02-14 20:30:48.254389] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.951 qpair failed and we were unable to recover it. 00:30:10.951 [2024-02-14 20:30:48.254803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.255237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.255250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.255667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.256014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.256028] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.256451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.256803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.256818] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.257250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.257607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.257622] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 
00:30:10.952 [2024-02-14 20:30:48.257982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.258322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.258337] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.258900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.259345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.259360] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.259708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.259995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.260009] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.260439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.260776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.260791] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.261148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.261571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.261585] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.261940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.262242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.262256] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.262628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.263046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.263060] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 
00:30:10.952 [2024-02-14 20:30:48.263365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.263769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.263784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.264189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.264529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.264544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.264839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.265195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.265210] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.265615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.265913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.265927] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.266286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.266653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.266668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.266807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.267143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.267158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.267598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.267946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.267961] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 
00:30:10.952 [2024-02-14 20:30:48.268316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.268680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.268695] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.269099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.269473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.269488] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.269869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.270269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.270283] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.270566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.270914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.270929] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.271218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.271566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.271580] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.271883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.272175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.272190] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.952 qpair failed and we were unable to recover it. 00:30:10.952 [2024-02-14 20:30:48.272598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.272959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.952 [2024-02-14 20:30:48.272974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 
00:30:10.953 [2024-02-14 20:30:48.273338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.273749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.273764] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.274116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.274536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.274550] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.274905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.275254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.275268] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.275699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.276100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.276114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.276418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.276700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.276715] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.277139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.277417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.277431] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.277710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.278075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.278089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 
00:30:10.953 [2024-02-14 20:30:48.278375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.278736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.278761] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.279066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.279402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.279416] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.279820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.280166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.280181] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.280557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.280895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.280910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.281337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.281744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.281759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.282159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.282416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.282431] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.282799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.283253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.283267] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 
00:30:10.953 [2024-02-14 20:30:48.283637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.284081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.284095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.284445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.284834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.284849] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.285140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.285540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.285555] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.285917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.286267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.286282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.286585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.286987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.287002] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.287361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.287715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.287730] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.288138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.288565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.288579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 
00:30:10.953 [2024-02-14 20:30:48.288862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.289261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.289275] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.289615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.290016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.290031] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.290407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.290694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.290709] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.291110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.291401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.291416] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.291752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.292175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.292189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.292625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.292847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.292862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 00:30:10.953 [2024-02-14 20:30:48.293308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.293661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.953 [2024-02-14 20:30:48.293677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.953 qpair failed and we were unable to recover it. 
00:30:10.953 [2024-02-14 20:30:48.294084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.294435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.294449] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.294786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.295086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.295100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.295454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.295805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.295819] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.296219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.296620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.296635] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.296995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.297361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.297375] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.297801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.298155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.298169] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.298536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.298920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.298934] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 
00:30:10.954 [2024-02-14 20:30:48.299312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.299669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.299683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.300048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.300448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.300463] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.300862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.301210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.301225] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.301575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.301875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.301890] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.302297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.302728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.302742] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.303162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.303408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.303423] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.303617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.303896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.303910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 
00:30:10.954 [2024-02-14 20:30:48.304107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.304460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.304474] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.304757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.305179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.305193] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.305623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.305976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.305991] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.306280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.306625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.306639] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.307055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.307487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.307501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.307852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.308251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.308266] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.308669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.309015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.309029] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 
00:30:10.954 [2024-02-14 20:30:48.309368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.309833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.309847] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.310282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.310686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.310700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.311007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.311358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.311372] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.311668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.312029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.312043] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.312472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.312891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.312906] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.313356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.313778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.313792] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 00:30:10.954 [2024-02-14 20:30:48.314086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.314278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.314293] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.954 qpair failed and we were unable to recover it. 
00:30:10.954 [2024-02-14 20:30:48.314730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.315065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.954 [2024-02-14 20:30:48.315080] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.315417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.315706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.315721] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.316082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.316457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.316473] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.316852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.317202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.317216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.317586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.317988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.318002] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.318412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.318843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.318858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.319208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.319655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.319670] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 
00:30:10.955 [2024-02-14 20:30:48.320075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.320504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.320519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.320866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.321260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.321274] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.321628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.321993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.322008] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.322460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.322757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.322772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.323188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.323585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.323599] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.323818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.324240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.324257] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 00:30:10.955 [2024-02-14 20:30:48.324615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.324954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.955 [2024-02-14 20:30:48.324969] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420 00:30:10.955 qpair failed and we were unable to recover it. 
00:30:10.955 [2024-02-14 20:30:48.325323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.325489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.325503] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.325909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.326313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.326327] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.326688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.327075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.327089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.327493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.327914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.327928] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.328353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.328752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.328766] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.329169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.329513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.329527] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.329943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.330363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.330378] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.330756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.331122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.331137] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.331503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.331703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.331721] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.332158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.332503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.332517] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.332965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.333315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.333330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.333742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.334159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.334173] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.334537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.334843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.334858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.335215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.335640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.335665] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.336091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.336438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.955 [2024-02-14 20:30:48.336453] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.955 qpair failed and we were unable to recover it.
00:30:10.955 [2024-02-14 20:30:48.336752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.337156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.337171] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.337527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.337883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.337897] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.338246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.338603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.338617] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.338969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.339395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.339411] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.339837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.340190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.340204] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.340560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.340913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.340928] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.341290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.341718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.341733] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.342092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.342369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.342383] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.342730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.343129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.343144] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.343509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.343867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.343882] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.344248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.344665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.344680] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.345109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.345453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.345467] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.345886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.346230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.346245] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.346626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.346978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.346992] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.347442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.347846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.347861] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.348275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.348631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.348657] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.349014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.349359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.349373] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.349801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.350205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.350220] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.350656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.350960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.350975] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:10.956 [2024-02-14 20:30:48.351331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.351760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.956 [2024-02-14 20:30:48.351775] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:10.956 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.352132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.352495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.352509] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.352817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.353239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.353253] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.353659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.354014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.354028] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.354313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.354688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.354703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.355063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.355409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.355423] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.355773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.356142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.356156] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.356506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.356909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.356924] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.357279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.357633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.357650] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.358014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.358444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.358458] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.358884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.359237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.359251] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.222 qpair failed and we were unable to recover it.
00:30:11.222 [2024-02-14 20:30:48.359546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.222 [2024-02-14 20:30:48.359970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.359984] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.360330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.360681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.360695] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.361121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.361462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.361476] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.361833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.362168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.362183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.362551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.362896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.362911] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.363317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.363691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.363706] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.364054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.364399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.364413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.364825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.365236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.365250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.365657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.365994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.366009] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.366386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.366786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.366801] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.367213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.367568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.367582] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.367985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.368354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.368369] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.368724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.369020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.369034] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.369365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.369604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.369618] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.370063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.370527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.370542] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.370959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.371389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.371403] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.371836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.372261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.372276] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.372681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.373121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.373135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.373546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.373917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.373931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.374274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.374699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.374715] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.375140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.375516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.375530] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.375932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.376286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.376300] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.376777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.377247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.377261] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.377686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.378102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.378116] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.378520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.378932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.378947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.379369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.379769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.379784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.380225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.380654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.380669] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.381008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.381433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.381447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.223 qpair failed and we were unable to recover it.
00:30:11.223 [2024-02-14 20:30:48.381809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.223 [2024-02-14 20:30:48.382260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.382275] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.382691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.383121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.383136] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.383558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.383978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.383992] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.384394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.384690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.384706] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.385054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.385456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.385471] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.385872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.386170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.386184] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.386592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.386999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.387014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.387419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.387828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.387842] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.388179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.388601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.388615] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.389037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.389460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.389474] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.389902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.390321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.390336] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.390707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.391139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.391154] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.391557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.391970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.391984] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.392388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.392743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.392758] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.393094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.393441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.393455] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.393815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.394251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.394265] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc780000b90 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.394763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.395162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.395180] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.395591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.395945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.395962] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.396416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.396840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.396854] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.397285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.397706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.397723] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.398152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.398571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.398585] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.398995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.399420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.399434] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.399859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.400228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.400242] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.400675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.401088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.401103] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.401528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.401881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.401896] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.402323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.402751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.402766] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.403194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.403576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.403593] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.403932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.404290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.404305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.224 [2024-02-14 20:30:48.404733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.405162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.224 [2024-02-14 20:30:48.405177] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.224 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.405601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.405988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.406003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.406431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.406811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.406826] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.407267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.407652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.407667] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.408021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.408366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.408381] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.408807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.409233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.409248] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.409677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.410098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.410112] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.410465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.410822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.410836] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.411261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.411687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.411705] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.412125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.412487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.412501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.412901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.413315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.413330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.413735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.414152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.414166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.414567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.414938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.414953] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.415339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.415768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.415783] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.416184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.416607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.416621] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.416974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.417345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.417360] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.417800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.418227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.418242] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.418664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.419083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.419097] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.419498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.419836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.419853] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.420203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.420661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.420677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.420967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.421384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.421398] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.421799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.422222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.422236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.422659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.423001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.423015] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.423384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.423732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.423746] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.424147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.424488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.225 [2024-02-14 20:30:48.424503] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.225 qpair failed and we were unable to recover it.
00:30:11.225 [2024-02-14 20:30:48.424908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.425325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.425339] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.225 qpair failed and we were unable to recover it. 00:30:11.225 [2024-02-14 20:30:48.425763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.426119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.426133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.225 qpair failed and we were unable to recover it. 00:30:11.225 [2024-02-14 20:30:48.426582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.426964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.426979] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.225 qpair failed and we were unable to recover it. 00:30:11.225 [2024-02-14 20:30:48.427316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.427720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.427735] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.225 qpair failed and we were unable to recover it. 00:30:11.225 [2024-02-14 20:30:48.428150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.428502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.225 [2024-02-14 20:30:48.428517] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.225 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.428866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.429245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.429260] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.429564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.429906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.429921] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 
00:30:11.226 [2024-02-14 20:30:48.430302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.430589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.430604] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.431006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.431292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.431306] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.431734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.432078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.432093] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.432439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.432775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.432790] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.433215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.433534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.433549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.433833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.434118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.434133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.434541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.434877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.434892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 
00:30:11.226 [2024-02-14 20:30:48.435246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.435601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.435616] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.436018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.436303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.436318] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.436746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.437115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.437129] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.437445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.437867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.437882] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.438283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.438640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.438659] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.439008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.439359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.439374] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.439750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.440196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.440211] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 
00:30:11.226 [2024-02-14 20:30:48.440572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.440924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.440939] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.441301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.441669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.441684] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.441962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.442259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.442273] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.442625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.442974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.442990] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.443393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.443743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.443758] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.444040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.444458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.444472] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.444837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.445212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.445226] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 
00:30:11.226 [2024-02-14 20:30:48.445502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.445908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.445923] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.446299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.446672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.446686] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.447031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.447388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.447403] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.447853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.448236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.448251] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.226 [2024-02-14 20:30:48.448589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.448936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.226 [2024-02-14 20:30:48.448950] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.226 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.449309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.449662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.449677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.450124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.450526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.450541] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 
00:30:11.227 [2024-02-14 20:30:48.450967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.451398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.451413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.451783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.452133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.452147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.452525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.452929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.452943] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.453326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.453739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.453754] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.454134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.454489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.454504] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.454919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.455212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.455226] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.455523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.455925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.455939] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 
00:30:11.227 [2024-02-14 20:30:48.456362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.456763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.456778] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.457207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.457626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.457641] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.458084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.458482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.458500] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.458857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.459267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.459282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.459684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.460023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.460037] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.460416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.460766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.460781] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.461151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.461520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.461534] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 
00:30:11.227 [2024-02-14 20:30:48.461899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.462274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.462288] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.462645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.462998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.463013] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.463443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.463780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.463795] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.464133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.464501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.464515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.464994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.465339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.465354] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.465796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.466171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.466185] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.227 [2024-02-14 20:30:48.466604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.467039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.467055] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 
00:30:11.227 [2024-02-14 20:30:48.467396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.467816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.227 [2024-02-14 20:30:48.467831] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.227 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.468035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.468387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.468402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.468803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.469200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.469216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.469552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.469911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.469927] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.470285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.470706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.470721] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.471195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.471546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.471561] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.471919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.472289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.472303] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 
00:30:11.228 [2024-02-14 20:30:48.472577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.473000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.473015] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.473426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.473816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.473831] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.474263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.474688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.474703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.475107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.475486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.475501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.475902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.476257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.476271] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.476691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.477090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.477104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.477480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.477837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.477851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 
00:30:11.228 [2024-02-14 20:30:48.478205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.478615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.478629] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.478942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.479322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.479336] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.479696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.480142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.480157] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.480584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.480945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.480960] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.481300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.481598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.481612] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.481970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.482371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.482385] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.482738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.483136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.483151] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 
00:30:11.228 [2024-02-14 20:30:48.483433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.483857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.483872] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.484280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.484679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.484694] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.485124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.485535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.485549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.485978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.486320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.486334] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.486689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.487153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.487168] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.487528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.487933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.487948] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.488383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.488957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.488972] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 
00:30:11.228 [2024-02-14 20:30:48.489532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.489885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.489900] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.228 qpair failed and we were unable to recover it. 00:30:11.228 [2024-02-14 20:30:48.490274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.228 [2024-02-14 20:30:48.490557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.490572] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.490923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.491213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.491227] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.491577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.491933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.491948] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.492365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.492718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.492733] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.493160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.493587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.493602] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.493872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.494292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.494307] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 
00:30:11.229 [2024-02-14 20:30:48.494616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.495031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.495047] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.495451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.495912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.495927] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.496311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.496658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.496673] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.497045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.497342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.497357] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.497761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.497899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.497916] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.498317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.498661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.498676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.499024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.499368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.499382] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 
00:30:11.229 [2024-02-14 20:30:48.499720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.500064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.500078] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.500443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.500864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.500879] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.501329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.501913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.501928] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.502229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.502654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.502669] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.503072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.503351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.503366] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.503707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.504002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.504017] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.504380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.504801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.504817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 
00:30:11.229 [2024-02-14 20:30:48.505170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.505510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.505524] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.505957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.506380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.506394] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.506689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.507110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.507124] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.507390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.507804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.507819] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.508163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.508588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.508603] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.508907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.509195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.509210] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.509632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.509835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.509852] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 
00:30:11.229 [2024-02-14 20:30:48.510206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.510500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.510515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.510940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.511344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.511359] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.229 qpair failed and we were unable to recover it. 00:30:11.229 [2024-02-14 20:30:48.511773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.229 [2024-02-14 20:30:48.512061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.512076] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.512439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.512842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.512857] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.513153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.513507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.513522] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.513829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.514190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.514206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.514553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.514986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.515001] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 
00:30:11.230 [2024-02-14 20:30:48.515283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.515712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.515728] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.516104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.516380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.516395] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.516820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.517186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.517201] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.517631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.518041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.518056] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.518428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.518869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.518884] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.519310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.519612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.519628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.520063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.520371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.520386] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 
00:30:11.230 [2024-02-14 20:30:48.520790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.521135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.521150] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.521589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.521936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.521951] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.522401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.522775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.522790] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.523146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.523449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.523464] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.523864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.524212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.524227] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.524600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.524979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.524995] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.525345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.525703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.525719] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 
00:30:11.230 [2024-02-14 20:30:48.526063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.526463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.526478] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.526873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.527396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.527410] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.527710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.528062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.528077] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.528443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.528867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.528884] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.529241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.529591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.529606] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.529974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.530331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.530346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.530792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.531159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.531174] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 
00:30:11.230 [2024-02-14 20:30:48.531545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.531917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.531932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.532297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.532486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.532501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.532850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.533228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.230 [2024-02-14 20:30:48.533243] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.230 qpair failed and we were unable to recover it. 00:30:11.230 [2024-02-14 20:30:48.533623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.533913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.533929] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.534267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.534686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.534701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.535114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.535539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.535554] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.535926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.536246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.536264] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 
00:30:11.231 [2024-02-14 20:30:48.536691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.537027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.537042] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.537457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.537800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.537815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.538169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.538597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.538611] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.539036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.539313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.539328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.539671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.540095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.540110] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.540509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.540918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.540933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.541283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.541694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.541710] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 
00:30:11.231 [2024-02-14 20:30:48.542003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.542429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.542443] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.542731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.543080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.543095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.543442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.543797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.543813] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.544167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.544519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.544534] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.544983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.545331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.545346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.545748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.546149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.546164] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.546500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.546882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.546897] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 
00:30:11.231 [2024-02-14 20:30:48.547250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.547603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.547618] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.547969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.548227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.548242] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.548686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.549085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.549100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.549525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.549874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.549890] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.550240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.550577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.550592] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.551024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.551448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.551463] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.231 qpair failed and we were unable to recover it. 00:30:11.231 [2024-02-14 20:30:48.551819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.552112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.231 [2024-02-14 20:30:48.552127] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 
00:30:11.232 [2024-02-14 20:30:48.552321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.552721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.552737] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.552992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.553396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.553411] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.553746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.554099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.554113] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.554480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.554784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.554799] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.555095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.555527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.555542] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.555897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.556333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.556347] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.556539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.556828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.556843] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 
00:30:11.232 [2024-02-14 20:30:48.557270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.557607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.557622] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.558067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.558480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.558495] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.558927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.559353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.559368] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.559769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.560102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.560116] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.560406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.560770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.560785] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.561136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.561489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.561503] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.561872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.562297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.562312] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 
00:30:11.232 [2024-02-14 20:30:48.562715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.563126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.563140] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.563542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.563955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.563970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.564372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.564658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.564673] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.565035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.565404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.565418] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.565844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.566272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.566286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.566729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.567159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.567173] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.567596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.568015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.568030] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 
00:30:11.232 [2024-02-14 20:30:48.568375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.568747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.568761] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.569187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.569527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.569542] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.569969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.570371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.570386] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.570797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.571218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.571232] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.571661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.572084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.572098] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.572525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.572925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.572939] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 00:30:11.232 [2024-02-14 20:30:48.573307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.573709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.573723] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.232 qpair failed and we were unable to recover it. 
00:30:11.232 [2024-02-14 20:30:48.574136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.232 [2024-02-14 20:30:48.574540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.574554] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.574984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.575334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.575350] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.575753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.576101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.576115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.576471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.576891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.576906] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.577307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.577655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.577670] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.578042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.578466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.578481] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.578831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.579278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.579293] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 
00:30:11.233 [2024-02-14 20:30:48.579659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.580082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.580097] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.580523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.580922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.580937] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.581384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.581804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.581819] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.582242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.582617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.582631] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.583011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.583436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.583452] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.583879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.584263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.584277] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.584706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.585128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.585142] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 
00:30:11.233 [2024-02-14 20:30:48.585439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.585864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.585879] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.586224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.586659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.586674] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.587101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.587444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.587458] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.587811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.588151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.588166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.588579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.588927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.588942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.589345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.589757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.589772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 00:30:11.233 [2024-02-14 20:30:48.590210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.590560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.233 [2024-02-14 20:30:48.590575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.233 qpair failed and we were unable to recover it. 
00:30:11.233 20:30:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:30:11.233 [2024-02-14 20:30:48.591023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 20:30:48 -- common/autotest_common.sh@850 -- # return 0
00:30:11.233 [2024-02-14 20:30:48.591370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.591388] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.233 qpair failed and we were unable to recover it.
00:30:11.233 20:30:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:30:11.233 20:30:48 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:11.233 [2024-02-14 20:30:48.591835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 20:30:48 -- common/autotest_common.sh@10 -- # set +x
00:30:11.233 [2024-02-14 20:30:48.592207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.592222] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.233 qpair failed and we were unable to recover it.
00:30:11.233 [2024-02-14 20:30:48.592599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.592928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.592942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.233 qpair failed and we were unable to recover it.
00:30:11.233 [2024-02-14 20:30:48.593308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.593665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.593680] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.233 qpair failed and we were unable to recover it.
00:30:11.233 [2024-02-14 20:30:48.594129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.594464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.594479] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.233 qpair failed and we were unable to recover it.
00:30:11.233 [2024-02-14 20:30:48.594835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.595192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.595206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.233 qpair failed and we were unable to recover it.
00:30:11.233 [2024-02-14 20:30:48.595559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.596003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.233 [2024-02-14 20:30:48.596018] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.233 qpair failed and we were unable to recover it.
00:30:11.233 [2024-02-14 20:30:48.596385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.596785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.596801] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.596993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.597428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.597443] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.597871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.598176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.598191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.598643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.599084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.599099] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.599539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.599888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.599903] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.600069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.600421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.600436] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.600760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.601120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.601135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 
00:30:11.234 [2024-02-14 20:30:48.601604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.601968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.601983] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.602408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.602833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.602848] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.603127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.603541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.603556] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.603957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.604293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.604307] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.604736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.605095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.605110] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.605400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.605756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.605771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 00:30:11.234 [2024-02-14 20:30:48.606054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.606469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.234 [2024-02-14 20:30:48.606486] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420 00:30:11.234 qpair failed and we were unable to recover it. 
00:30:11.234 [2024-02-14 20:30:48.606879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.234 [2024-02-14 20:30:48.607268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.234 [2024-02-14 20:30:48.607283] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.234 qpair failed and we were unable to recover it.
00:30:11.234 [... the same four-line connect()/qpair retry failure repeats 20 more times (20:30:48.607635-20:30:48.622764) ...]
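errno 111 in the records above is ECONNREFUSED: the initiator is retrying connect() against 10.0.0.2:4420 before the target has a listener there, so every attempt is refused until the nvmf_tcp_listen notice appears further down. A minimal shell check for that state, assuming access to the target host and using the address and port taken from these records:

    # Decode errno 111, then look for a listener on the NVMe/TCP port
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused
    ss -tln 'sport = :4420'   # no rows listed -> nothing listening -> connect() is refused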
00:30:11.235 [... the retry sequence continues, 3 more times (20:30:48.623051-20:30:48.625135) ...]
00:30:11.235 [2024-02-14 20:30:48.625442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.235 20:30:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:11.235 [2024-02-14 20:30:48.625843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.235 [2024-02-14 20:30:48.625861] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.235 20:30:48 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:11.235 qpair failed and we were unable to recover it.
00:30:11.235 20:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:11.235 [2024-02-14 20:30:48.626290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.235 20:30:48 -- common/autotest_common.sh@10 -- # set +x
00:30:11.235 [2024-02-14 20:30:48.626690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.235 [2024-02-14 20:30:48.626707] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.235 qpair failed and we were unable to recover it.
00:30:11.235 [... the retry sequence continues, 2 more times (20:30:48.627063-20:30:48.628149) ...]
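The trap registered by nvmf/common.sh above is what ties target teardown to the test's lifetime: process_shm and nvmftestfini run whether the script exits normally or is killed. A minimal sketch of the same bash idiom, with a hypothetical cleanup function standing in for those helpers:

    #!/usr/bin/env bash
    # Run cleanup once on Ctrl-C, kill, or normal exit, whichever happens first.
    cleanup() {
        trap - SIGINT SIGTERM EXIT    # drop the handler so EXIT does not re-run it
        echo 'tearing down target and shared memory' >&2
    }
    trap cleanup SIGINT SIGTERM EXIT
    echo 'test body runs here'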
00:30:11.235 [... the connect()/qpair retry sequence repeats 14 more times (20:30:48.628569-20:30:48.639110) ...]
00:30:11.497 [... the connect()/qpair retry sequence repeats 5 more times (20:30:48.639536-20:30:48.643183) ...]
00:30:11.497 Malloc0
00:30:11.497 [2024-02-14 20:30:48.643615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.497 [2024-02-14 20:30:48.643978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.497 [2024-02-14 20:30:48.643994] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.497 qpair failed and we were unable to recover it.
00:30:11.497 20:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:11.497 20:30:48 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:11.497 20:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:11.497 20:30:48 -- common/autotest_common.sh@10 -- # set +x
00:30:11.497 [... connect() retries continue ...]
00:30:11.498 [2024-02-14 20:30:48.650864] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:11.498 [... connect() retries continue ...]
00:30:11.498 20:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:11.498 20:30:48 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:11.498 20:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:11.498 20:30:48 -- common/autotest_common.sh@10 -- # set +x
00:30:11.498 [... connect() retries continue ...]
00:30:11.498 20:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:11.498 20:30:48 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:11.498 20:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:11.498 20:30:48 -- common/autotest_common.sh@10 -- # set +x
00:30:11.498 [... connect() retries continue ...]
00:30:11.498 20:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:11.498 [2024-02-14 20:30:48.672428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.498 [2024-02-14 20:30:48.672443] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.498 qpair failed and we were unable to recover it.
00:30:11.499 [2024-02-14 20:30:48.672785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.499 20:30:48 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:11.499 20:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:11.499 [2024-02-14 20:30:48.673187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.499 [2024-02-14 20:30:48.673202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181510 with addr=10.0.0.2, port=4420
00:30:11.499 qpair failed and we were unable to recover it.
00:30:11.499 20:30:48 -- common/autotest_common.sh@10 -- # set +x
00:30:11.499 [... the retry sequence continues, 3 more times (20:30:48.673615-20:30:48.675516) ...]
00:30:11.499 [2024-02-14 20:30:48.675934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:11.499 [2024-02-14 20:30:48.675966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.499 [2024-02-14 20:30:48.679245] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:30:11.499 [2024-02-14 20:30:48.679294] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2181510 (107): Transport endpoint is not connected
00:30:11.499 [2024-02-14 20:30:48.679346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:11.499 qpair failed and we were unable to recover it.
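Pieced together from the xtrace lines above, the target bring-up is a five-step RPC sequence, and only once it completes does the listening notice appear. The same sequence can be replayed by hand against a running nvmf_tgt with SPDK's scripts/rpc.py (a sketch using the names and address from this log, not the test script itself; rpc_cmd in the test harness is effectively a wrapper around rpc.py):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB RAM-backed bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o           # TCP transport, same '-o' flag the test passes
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only after the final add_listener call does the target accept connections, which is why every connect() before the notice above failed with ECONNREFUSED.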
00:30:11.499 20:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:11.499 20:30:48 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:11.499 20:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:11.499 20:30:48 -- common/autotest_common.sh@10 -- # set +x
00:30:11.499 [2024-02-14 20:30:48.681564] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.499 [2024-02-14 20:30:48.681758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.499 [2024-02-14 20:30:48.681785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.499 [2024-02-14 20:30:48.681797] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.499 [2024-02-14 20:30:48.681807] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:11.499 [2024-02-14 20:30:48.681833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:11.499 qpair failed and we were unable to recover it.
00:30:11.499 20:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:11.499 20:30:48 -- host/target_disconnect.sh@58 -- # wait 1966440
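From this point the failure mode changes: the listener is up, but the test has deliberately severed the controller, so each reconnect attempt's Fabrics CONNECT for I/O qpair id 3 is rejected ("Unknown controller ID 0x1" on the target side, "sct 1, sc 130" on the host side). Decoding those status fields is an interpretation per the NVMe-oF spec, not something the log states directly:

    printf 'sct=0x%x sc=0x%x\n' 1 130    # -> sct=0x1 sc=0x82
    # Status Code Type 0x1 = command specific; for a Fabrics CONNECT command,
    # status code 0x82 is 'Connect Invalid Parameters': the target no longer
    # recognizes controller ID 0x1, so the I/O queue pair cannot be re-attached.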
00:30:11.499 [... the same "Unknown controller ID 0x1" / Fabric CONNECT failure block repeats 26 more times, roughly every 10 ms (20:30:48.691461-20:30:48.942100), each ending "qpair failed and we were unable to recover it." ...]
00:30:11.761 [2024-02-14 20:30:48.952050] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.761 [2024-02-14 20:30:48.952168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.761 [2024-02-14 20:30:48.952184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.761 [2024-02-14 20:30:48.952191] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.761 [2024-02-14 20:30:48.952196] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.761 [2024-02-14 20:30:48.952212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.761 qpair failed and we were unable to recover it. 00:30:11.761 [2024-02-14 20:30:48.962012] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.761 [2024-02-14 20:30:48.962131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.761 [2024-02-14 20:30:48.962148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.761 [2024-02-14 20:30:48.962154] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.761 [2024-02-14 20:30:48.962160] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.761 [2024-02-14 20:30:48.962175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.761 qpair failed and we were unable to recover it. 00:30:11.761 [2024-02-14 20:30:48.972056] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.761 [2024-02-14 20:30:48.972188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.761 [2024-02-14 20:30:48.972205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.761 [2024-02-14 20:30:48.972212] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.761 [2024-02-14 20:30:48.972217] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.761 [2024-02-14 20:30:48.972236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 
00:30:11.762 [2024-02-14 20:30:48.982067] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:48.982187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:48.982203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:48.982210] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:48.982216] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:48.982231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 00:30:11.762 [2024-02-14 20:30:48.992174] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:48.992295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:48.992312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:48.992319] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:48.992324] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:48.992339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 00:30:11.762 [2024-02-14 20:30:49.002335] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.002454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.002471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.002478] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.002483] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.002499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 
00:30:11.762 [2024-02-14 20:30:49.012175] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.012323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.012341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.012348] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.012353] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.012369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 00:30:11.762 [2024-02-14 20:30:49.022433] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.022709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.022731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.022738] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.022744] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.022760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 00:30:11.762 [2024-02-14 20:30:49.032470] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.032598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.032616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.032622] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.032628] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.032644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 
00:30:11.762 [2024-02-14 20:30:49.042283] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.042406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.042422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.042429] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.042434] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.042450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 00:30:11.762 [2024-02-14 20:30:49.052318] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.052433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.052450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.052456] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.052462] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.052478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 00:30:11.762 [2024-02-14 20:30:49.062372] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.062494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.062511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.062518] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.062523] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.062542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 
00:30:11.762 [2024-02-14 20:30:49.072422] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.072542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.072559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.072566] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.072572] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.072587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 00:30:11.762 [2024-02-14 20:30:49.082452] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.082574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.082591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.082598] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.082604] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.082619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 00:30:11.762 [2024-02-14 20:30:49.092440] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.092556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.092572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.092579] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.092585] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.092601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 
00:30:11.762 [2024-02-14 20:30:49.102497] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.102617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.762 [2024-02-14 20:30:49.102634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.762 [2024-02-14 20:30:49.102641] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.762 [2024-02-14 20:30:49.102652] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.762 [2024-02-14 20:30:49.102669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.762 qpair failed and we were unable to recover it. 00:30:11.762 [2024-02-14 20:30:49.112523] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.762 [2024-02-14 20:30:49.112644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.763 [2024-02-14 20:30:49.112672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.763 [2024-02-14 20:30:49.112679] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.763 [2024-02-14 20:30:49.112685] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.763 [2024-02-14 20:30:49.112701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.763 qpair failed and we were unable to recover it. 00:30:11.763 [2024-02-14 20:30:49.122481] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.763 [2024-02-14 20:30:49.122598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.763 [2024-02-14 20:30:49.122615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.763 [2024-02-14 20:30:49.122622] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.763 [2024-02-14 20:30:49.122628] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.763 [2024-02-14 20:30:49.122644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.763 qpair failed and we were unable to recover it. 
00:30:11.763 [2024-02-14 20:30:49.132580] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.763 [2024-02-14 20:30:49.132702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.763 [2024-02-14 20:30:49.132719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.763 [2024-02-14 20:30:49.132726] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.763 [2024-02-14 20:30:49.132732] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.763 [2024-02-14 20:30:49.132747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.763 qpair failed and we were unable to recover it. 00:30:11.763 [2024-02-14 20:30:49.142579] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.763 [2024-02-14 20:30:49.142707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.763 [2024-02-14 20:30:49.142724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.763 [2024-02-14 20:30:49.142730] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.763 [2024-02-14 20:30:49.142736] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.763 [2024-02-14 20:30:49.142752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.763 qpair failed and we were unable to recover it. 00:30:11.763 [2024-02-14 20:30:49.152654] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.763 [2024-02-14 20:30:49.152775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.763 [2024-02-14 20:30:49.152792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.763 [2024-02-14 20:30:49.152799] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.763 [2024-02-14 20:30:49.152808] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.763 [2024-02-14 20:30:49.152824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.763 qpair failed and we were unable to recover it. 
00:30:11.763 [2024-02-14 20:30:49.162682] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.763 [2024-02-14 20:30:49.162805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.763 [2024-02-14 20:30:49.162822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.763 [2024-02-14 20:30:49.162829] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.763 [2024-02-14 20:30:49.162835] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.763 [2024-02-14 20:30:49.162851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.763 qpair failed and we were unable to recover it. 00:30:11.763 [2024-02-14 20:30:49.172692] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.763 [2024-02-14 20:30:49.172807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.763 [2024-02-14 20:30:49.172823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.763 [2024-02-14 20:30:49.172830] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.763 [2024-02-14 20:30:49.172836] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:11.763 [2024-02-14 20:30:49.172851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.763 qpair failed and we were unable to recover it. 00:30:12.023 [2024-02-14 20:30:49.182754] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.023 [2024-02-14 20:30:49.182874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.023 [2024-02-14 20:30:49.182890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.023 [2024-02-14 20:30:49.182897] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.023 [2024-02-14 20:30:49.182902] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.023 [2024-02-14 20:30:49.182918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.023 qpair failed and we were unable to recover it. 
00:30:12.023 [2024-02-14 20:30:49.192751] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.023 [2024-02-14 20:30:49.192872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.023 [2024-02-14 20:30:49.192890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.023 [2024-02-14 20:30:49.192897] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.023 [2024-02-14 20:30:49.192902] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.023 [2024-02-14 20:30:49.192919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.023 qpair failed and we were unable to recover it. 00:30:12.023 [2024-02-14 20:30:49.202773] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.023 [2024-02-14 20:30:49.202894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.023 [2024-02-14 20:30:49.202911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.023 [2024-02-14 20:30:49.202918] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.023 [2024-02-14 20:30:49.202923] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.023 [2024-02-14 20:30:49.202940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.023 qpair failed and we were unable to recover it. 00:30:12.023 [2024-02-14 20:30:49.212784] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.023 [2024-02-14 20:30:49.212946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.023 [2024-02-14 20:30:49.212963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.023 [2024-02-14 20:30:49.212970] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.023 [2024-02-14 20:30:49.212976] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.023 [2024-02-14 20:30:49.212992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.023 qpair failed and we were unable to recover it. 
00:30:12.023 [2024-02-14 20:30:49.222833] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.023 [2024-02-14 20:30:49.222951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.023 [2024-02-14 20:30:49.222969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.023 [2024-02-14 20:30:49.222975] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.023 [2024-02-14 20:30:49.222981] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.023 [2024-02-14 20:30:49.222997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.023 qpair failed and we were unable to recover it. 00:30:12.023 [2024-02-14 20:30:49.232852] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.023 [2024-02-14 20:30:49.232965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.232982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.232989] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.232994] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.233010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 00:30:12.024 [2024-02-14 20:30:49.242817] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.242967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.242984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.242990] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.243000] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.243016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 
00:30:12.024 [2024-02-14 20:30:49.252910] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.253190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.253208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.253215] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.253221] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.253236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 00:30:12.024 [2024-02-14 20:30:49.262871] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.262989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.263006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.263013] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.263019] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.263034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 00:30:12.024 [2024-02-14 20:30:49.272900] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.273019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.273036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.273043] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.273048] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.273064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 
00:30:12.024 [2024-02-14 20:30:49.283001] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.283118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.283135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.283142] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.283147] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.283163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 00:30:12.024 [2024-02-14 20:30:49.292949] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.293069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.293086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.293093] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.293099] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.293114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 00:30:12.024 [2024-02-14 20:30:49.302987] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.303104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.303121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.303128] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.303133] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.303149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 
00:30:12.024 [2024-02-14 20:30:49.313031] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.313148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.313165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.313172] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.313177] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.313193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 00:30:12.024 [2024-02-14 20:30:49.323031] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.323151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.323168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.323175] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.323180] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.323196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 00:30:12.024 [2024-02-14 20:30:49.333157] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.333276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.333293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.333299] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.333308] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.333324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 
00:30:12.024 [2024-02-14 20:30:49.343167] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.343287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.343304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.343310] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.343316] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.343332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 00:30:12.024 [2024-02-14 20:30:49.353110] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.353236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.353252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.353259] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.353265] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.024 [2024-02-14 20:30:49.353280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.024 qpair failed and we were unable to recover it. 00:30:12.024 [2024-02-14 20:30:49.363191] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.024 [2024-02-14 20:30:49.363318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.024 [2024-02-14 20:30:49.363335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.024 [2024-02-14 20:30:49.363342] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.024 [2024-02-14 20:30:49.363348] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.025 [2024-02-14 20:30:49.363364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.025 qpair failed and we were unable to recover it. 
00:30:12.025 [2024-02-14 20:30:49.373177] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.025 [2024-02-14 20:30:49.373295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.025 [2024-02-14 20:30:49.373312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.025 [2024-02-14 20:30:49.373321] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.025 [2024-02-14 20:30:49.373328] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.025 [2024-02-14 20:30:49.373344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.025 qpair failed and we were unable to recover it. 00:30:12.025 [2024-02-14 20:30:49.383221] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.025 [2024-02-14 20:30:49.383345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.025 [2024-02-14 20:30:49.383362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.025 [2024-02-14 20:30:49.383368] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.025 [2024-02-14 20:30:49.383374] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.025 [2024-02-14 20:30:49.383389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.025 qpair failed and we were unable to recover it. 00:30:12.025 [2024-02-14 20:30:49.393329] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.025 [2024-02-14 20:30:49.393450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.025 [2024-02-14 20:30:49.393468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.025 [2024-02-14 20:30:49.393476] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.025 [2024-02-14 20:30:49.393482] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:12.025 [2024-02-14 20:30:49.393498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:12.025 qpair failed and we were unable to recover it. 
00:30:12.025 [2024-02-14 20:30:49.403328] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.025 [2024-02-14 20:30:49.403444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.025 [2024-02-14 20:30:49.403461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.025 [2024-02-14 20:30:49.403467] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.025 [2024-02-14 20:30:49.403473] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.025 [2024-02-14 20:30:49.403489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.025 qpair failed and we were unable to recover it.
00:30:12.025 [2024-02-14 20:30:49.413390] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.025 [2024-02-14 20:30:49.413508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.025 [2024-02-14 20:30:49.413525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.025 [2024-02-14 20:30:49.413532] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.025 [2024-02-14 20:30:49.413537] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.025 [2024-02-14 20:30:49.413553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.025 qpair failed and we were unable to recover it.
00:30:12.025 [2024-02-14 20:30:49.423392] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.025 [2024-02-14 20:30:49.423509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.025 [2024-02-14 20:30:49.423526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.025 [2024-02-14 20:30:49.423532] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.025 [2024-02-14 20:30:49.423542] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.025 [2024-02-14 20:30:49.423557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.025 qpair failed and we were unable to recover it.
00:30:12.025 [2024-02-14 20:30:49.433424] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.025 [2024-02-14 20:30:49.433545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.025 [2024-02-14 20:30:49.433561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.025 [2024-02-14 20:30:49.433568] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.025 [2024-02-14 20:30:49.433574] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.025 [2024-02-14 20:30:49.433590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.025 qpair failed and we were unable to recover it.
00:30:12.285 [2024-02-14 20:30:49.443465] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.285 [2024-02-14 20:30:49.443602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.285 [2024-02-14 20:30:49.443618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.285 [2024-02-14 20:30:49.443625] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.285 [2024-02-14 20:30:49.443631] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.285 [2024-02-14 20:30:49.443652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.285 qpair failed and we were unable to recover it.
00:30:12.285 [2024-02-14 20:30:49.453470] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.285 [2024-02-14 20:30:49.453590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.285 [2024-02-14 20:30:49.453607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.285 [2024-02-14 20:30:49.453614] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.285 [2024-02-14 20:30:49.453619] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.285 [2024-02-14 20:30:49.453635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.285 qpair failed and we were unable to recover it.
00:30:12.285 [2024-02-14 20:30:49.463507] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.285 [2024-02-14 20:30:49.463622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.285 [2024-02-14 20:30:49.463639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.285 [2024-02-14 20:30:49.463651] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.285 [2024-02-14 20:30:49.463657] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.285 [2024-02-14 20:30:49.463672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.285 qpair failed and we were unable to recover it.
00:30:12.285 [2024-02-14 20:30:49.473534] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.285 [2024-02-14 20:30:49.473667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.285 [2024-02-14 20:30:49.473684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.285 [2024-02-14 20:30:49.473692] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.285 [2024-02-14 20:30:49.473697] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.285 [2024-02-14 20:30:49.473713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.285 qpair failed and we were unable to recover it.
00:30:12.285 [2024-02-14 20:30:49.483654] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.285 [2024-02-14 20:30:49.483821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.285 [2024-02-14 20:30:49.483838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.285 [2024-02-14 20:30:49.483845] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.285 [2024-02-14 20:30:49.483850] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.285 [2024-02-14 20:30:49.483866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.285 qpair failed and we were unable to recover it.
00:30:12.285 [2024-02-14 20:30:49.493604] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.493728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.493745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.493752] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.493757] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.286 [2024-02-14 20:30:49.493773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.503637] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.503764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.503781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.503788] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.503794] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.286 [2024-02-14 20:30:49.503810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.513613] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.513739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.513755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.513765] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.513771] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.286 [2024-02-14 20:30:49.513786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.523695] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.523806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.523823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.523830] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.523835] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.286 [2024-02-14 20:30:49.523851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.533651] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.533768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.533784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.533791] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.533797] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.286 [2024-02-14 20:30:49.533813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.543747] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.544017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.544034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.544040] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.544047] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.286 [2024-02-14 20:30:49.544061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.553821] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.553981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.553997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.554004] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.554010] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:12.286 [2024-02-14 20:30:49.554026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.563828] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.563976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.564003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.564013] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.564022] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.286 [2024-02-14 20:30:49.564045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.573861] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.573981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.573999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.574006] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.574012] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.286 [2024-02-14 20:30:49.574029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.583867] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.583987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.584003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.584010] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.584016] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.286 [2024-02-14 20:30:49.584032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.593901] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.594031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.594048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.594055] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.594060] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.286 [2024-02-14 20:30:49.594076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.603865] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.603974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.603990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.604000] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.604006] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.286 [2024-02-14 20:30:49.604022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.613973] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.614092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.614109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.286 [2024-02-14 20:30:49.614116] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.286 [2024-02-14 20:30:49.614122] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.286 [2024-02-14 20:30:49.614138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.286 qpair failed and we were unable to recover it.
00:30:12.286 [2024-02-14 20:30:49.623986] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.286 [2024-02-14 20:30:49.624117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.286 [2024-02-14 20:30:49.624133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.287 [2024-02-14 20:30:49.624140] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.287 [2024-02-14 20:30:49.624146] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.287 [2024-02-14 20:30:49.624162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.287 qpair failed and we were unable to recover it.
00:30:12.287 [2024-02-14 20:30:49.633948] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.287 [2024-02-14 20:30:49.634068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.287 [2024-02-14 20:30:49.634083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.287 [2024-02-14 20:30:49.634090] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.287 [2024-02-14 20:30:49.634096] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.287 [2024-02-14 20:30:49.634112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.287 qpair failed and we were unable to recover it.
00:30:12.287 [2024-02-14 20:30:49.644070] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.287 [2024-02-14 20:30:49.644196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.287 [2024-02-14 20:30:49.644212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.287 [2024-02-14 20:30:49.644219] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.287 [2024-02-14 20:30:49.644225] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.287 [2024-02-14 20:30:49.644241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.287 qpair failed and we were unable to recover it.
00:30:12.287 [2024-02-14 20:30:49.654118] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.287 [2024-02-14 20:30:49.654279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.287 [2024-02-14 20:30:49.654296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.287 [2024-02-14 20:30:49.654303] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.287 [2024-02-14 20:30:49.654309] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.287 [2024-02-14 20:30:49.654324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.287 qpair failed and we were unable to recover it.
00:30:12.287 [2024-02-14 20:30:49.664123] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.287 [2024-02-14 20:30:49.664257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.287 [2024-02-14 20:30:49.664273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.287 [2024-02-14 20:30:49.664279] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.287 [2024-02-14 20:30:49.664285] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.287 [2024-02-14 20:30:49.664301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.287 qpair failed and we were unable to recover it.
00:30:12.287 [2024-02-14 20:30:49.674141] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.287 [2024-02-14 20:30:49.674263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.287 [2024-02-14 20:30:49.674279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.287 [2024-02-14 20:30:49.674285] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.287 [2024-02-14 20:30:49.674291] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.287 [2024-02-14 20:30:49.674306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.287 qpair failed and we were unable to recover it.
00:30:12.287 [2024-02-14 20:30:49.684163] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.287 [2024-02-14 20:30:49.684426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.287 [2024-02-14 20:30:49.684443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.287 [2024-02-14 20:30:49.684449] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.287 [2024-02-14 20:30:49.684455] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.287 [2024-02-14 20:30:49.684470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.287 qpair failed and we were unable to recover it.
00:30:12.287 [2024-02-14 20:30:49.694117] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.287 [2024-02-14 20:30:49.694258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.287 [2024-02-14 20:30:49.694278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.287 [2024-02-14 20:30:49.694285] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.287 [2024-02-14 20:30:49.694290] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.287 [2024-02-14 20:30:49.694306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.287 qpair failed and we were unable to recover it.
00:30:12.547 [2024-02-14 20:30:49.704154] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.547 [2024-02-14 20:30:49.704273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.547 [2024-02-14 20:30:49.704290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.547 [2024-02-14 20:30:49.704296] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.547 [2024-02-14 20:30:49.704302] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.547 [2024-02-14 20:30:49.704318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.547 qpair failed and we were unable to recover it.
00:30:12.547 [2024-02-14 20:30:49.714271] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.547 [2024-02-14 20:30:49.714387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.547 [2024-02-14 20:30:49.714403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.547 [2024-02-14 20:30:49.714410] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.547 [2024-02-14 20:30:49.714416] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.547 [2024-02-14 20:30:49.714431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.547 qpair failed and we were unable to recover it.
00:30:12.547 [2024-02-14 20:30:49.724298] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.724417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.724433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.724440] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.724445] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.724461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.734315] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.734434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.734451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.734459] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.734465] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.734480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.744401] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.744521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.744537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.744544] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.744550] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.744565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.754311] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.754432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.754448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.754454] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.754460] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.754476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.764399] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.764520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.764536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.764544] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.764549] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.764565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.774365] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.774480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.774496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.774502] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.774508] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.774523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.784404] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.784520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.784540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.784548] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.784553] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.784569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.794495] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.794615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.794631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.794637] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.794643] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.794667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.804527] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.804651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.804668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.804674] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.804680] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.804696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.814552] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.814677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.814693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.814700] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.814706] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.814722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.824757] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.824876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.824892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.824899] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.824904] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.824923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.834627] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.834794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.834810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.834817] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.834822] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.834838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.844637] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.844761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.844777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.548 [2024-02-14 20:30:49.844784] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.548 [2024-02-14 20:30:49.844789] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.548 [2024-02-14 20:30:49.844805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.548 qpair failed and we were unable to recover it.
00:30:12.548 [2024-02-14 20:30:49.854585] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.548 [2024-02-14 20:30:49.854709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.548 [2024-02-14 20:30:49.854725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.854731] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.854737] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.854752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.864705] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.864820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.864836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.864843] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.864848] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.864864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.874638] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.874762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.874780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.874787] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.874793] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.874808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.884753] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.884868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.884884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.884891] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.884896] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.884912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.894782] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.894899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.894915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.894922] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.894928] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.894943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.904802] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.904918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.904934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.904941] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.904947] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.904963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.914861] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.914977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.914993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.915000] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.915008] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.915024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.924793] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.924905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.924921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.924928] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.924933] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.924949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.934896] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.935012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.935028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.935034] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.935040] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.935055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.944920] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.945039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.945056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.945062] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.945068] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.945083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.549 [2024-02-14 20:30:49.954939] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.549 [2024-02-14 20:30:49.955057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.549 [2024-02-14 20:30:49.955072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.549 [2024-02-14 20:30:49.955079] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.549 [2024-02-14 20:30:49.955085] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.549 [2024-02-14 20:30:49.955100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.549 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:49.964974] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:49.965094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:49.965110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:49.965117] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:49.965122] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:49.965138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:49.974989] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:49.975101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:49.975117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:49.975123] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:49.975129] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:49.975144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:49.985186] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:49.985302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:49.985318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:49.985325] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:49.985331] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:49.985347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:49.995058] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:49.995172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:49.995188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:49.995195] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:49.995201] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:49.995217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:50.005220] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:50.005483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:50.005546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:50.005567] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:50.005596] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:50.005639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:50.015110] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:50.015235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:50.015252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:50.015259] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:50.015265] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:50.015282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:50.025150] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:50.025274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:50.025291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:50.025298] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:50.025304] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:50.025320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:50.035164] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:50.035293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:50.035312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:50.035320] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:50.035326] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:50.035344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:50.045203] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:50.045321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:50.045338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:50.045345] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:50.045351] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:50.045368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:50.055211] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.810 [2024-02-14 20:30:50.055323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.810 [2024-02-14 20:30:50.055339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.810 [2024-02-14 20:30:50.055346] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.810 [2024-02-14 20:30:50.055352] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90
00:30:12.810 [2024-02-14 20:30:50.055368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:12.810 qpair failed and we were unable to recover it.
00:30:12.810 [2024-02-14 20:30:50.065273] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.810 [2024-02-14 20:30:50.065393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.810 [2024-02-14 20:30:50.065409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.810 [2024-02-14 20:30:50.065416] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.810 [2024-02-14 20:30:50.065423] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.810 [2024-02-14 20:30:50.065439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.810 qpair failed and we were unable to recover it. 00:30:12.810 [2024-02-14 20:30:50.075730] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.810 [2024-02-14 20:30:50.075884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.075904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.075913] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.075921] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.075941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 00:30:12.811 [2024-02-14 20:30:50.085316] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.085450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.085466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.085474] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.085480] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.085496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 
00:30:12.811 [2024-02-14 20:30:50.095283] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.095431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.095447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.095458] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.095464] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.095480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 00:30:12.811 [2024-02-14 20:30:50.105398] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.105516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.105531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.105538] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.105545] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.105561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 00:30:12.811 [2024-02-14 20:30:50.115395] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.115512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.115528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.115535] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.115541] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.115557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 
00:30:12.811 [2024-02-14 20:30:50.125448] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.125569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.125585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.125592] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.125598] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.125613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 00:30:12.811 [2024-02-14 20:30:50.135437] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.135552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.135567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.135574] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.135580] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.135596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 00:30:12.811 [2024-02-14 20:30:50.145536] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.145710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.145727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.145733] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.145739] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.145755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 
00:30:12.811 [2024-02-14 20:30:50.155511] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.155632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.155653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.155660] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.155666] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.155682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 00:30:12.811 [2024-02-14 20:30:50.165572] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.165691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.165708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.165714] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.165721] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.165737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 00:30:12.811 [2024-02-14 20:30:50.175551] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.175672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.175688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.175695] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.175700] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.175717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 
00:30:12.811 [2024-02-14 20:30:50.185576] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.185698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.185715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.185725] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.185731] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.185748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 00:30:12.811 [2024-02-14 20:30:50.195627] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.195753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.195769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.195776] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.811 [2024-02-14 20:30:50.195782] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.811 [2024-02-14 20:30:50.195798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.811 qpair failed and we were unable to recover it. 00:30:12.811 [2024-02-14 20:30:50.205609] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.811 [2024-02-14 20:30:50.205757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.811 [2024-02-14 20:30:50.205773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.811 [2024-02-14 20:30:50.205781] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.812 [2024-02-14 20:30:50.205786] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.812 [2024-02-14 20:30:50.205802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.812 qpair failed and we were unable to recover it. 
00:30:12.812 [2024-02-14 20:30:50.215675] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.812 [2024-02-14 20:30:50.215825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.812 [2024-02-14 20:30:50.215841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.812 [2024-02-14 20:30:50.215848] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.812 [2024-02-14 20:30:50.215854] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:12.812 [2024-02-14 20:30:50.215870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.812 qpair failed and we were unable to recover it. 00:30:13.072 [2024-02-14 20:30:50.225708] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.072 [2024-02-14 20:30:50.225826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.072 [2024-02-14 20:30:50.225842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.072 [2024-02-14 20:30:50.225849] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.072 [2024-02-14 20:30:50.225854] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.072 [2024-02-14 20:30:50.225870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.072 qpair failed and we were unable to recover it. 00:30:13.072 [2024-02-14 20:30:50.235674] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.072 [2024-02-14 20:30:50.235793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.072 [2024-02-14 20:30:50.235809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.072 [2024-02-14 20:30:50.235816] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.072 [2024-02-14 20:30:50.235822] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.072 [2024-02-14 20:30:50.235838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.072 qpair failed and we were unable to recover it. 
00:30:13.072 [2024-02-14 20:30:50.245796] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.072 [2024-02-14 20:30:50.245909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.072 [2024-02-14 20:30:50.245925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.072 [2024-02-14 20:30:50.245932] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.072 [2024-02-14 20:30:50.245938] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.072 [2024-02-14 20:30:50.245953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.072 qpair failed and we were unable to recover it. 00:30:13.072 [2024-02-14 20:30:50.255802] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.072 [2024-02-14 20:30:50.255918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.072 [2024-02-14 20:30:50.255934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.072 [2024-02-14 20:30:50.255940] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.072 [2024-02-14 20:30:50.255947] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.072 [2024-02-14 20:30:50.255962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.072 qpair failed and we were unable to recover it. 00:30:13.072 [2024-02-14 20:30:50.265777] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.072 [2024-02-14 20:30:50.265895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.072 [2024-02-14 20:30:50.265911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.072 [2024-02-14 20:30:50.265917] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.072 [2024-02-14 20:30:50.265923] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.072 [2024-02-14 20:30:50.265939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.072 qpair failed and we were unable to recover it. 
00:30:13.072 [2024-02-14 20:30:50.275859] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.072 [2024-02-14 20:30:50.276006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.072 [2024-02-14 20:30:50.276024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.072 [2024-02-14 20:30:50.276031] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.072 [2024-02-14 20:30:50.276037] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.072 [2024-02-14 20:30:50.276053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.072 qpair failed and we were unable to recover it. 00:30:13.072 [2024-02-14 20:30:50.285891] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.072 [2024-02-14 20:30:50.286009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.072 [2024-02-14 20:30:50.286024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.072 [2024-02-14 20:30:50.286031] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.072 [2024-02-14 20:30:50.286037] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.072 [2024-02-14 20:30:50.286052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.072 qpair failed and we were unable to recover it. 00:30:13.072 [2024-02-14 20:30:50.295866] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.072 [2024-02-14 20:30:50.295983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.072 [2024-02-14 20:30:50.295998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.072 [2024-02-14 20:30:50.296005] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.072 [2024-02-14 20:30:50.296011] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.072 [2024-02-14 20:30:50.296027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.072 qpair failed and we were unable to recover it. 
00:30:13.073 [2024-02-14 20:30:50.305958] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.306081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.306098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.306104] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.306110] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.306125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 00:30:13.073 [2024-02-14 20:30:50.315976] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.316096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.316112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.316118] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.316124] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.316143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 00:30:13.073 [2024-02-14 20:30:50.325934] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.326050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.326066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.326072] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.326078] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.326093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 
00:30:13.073 [2024-02-14 20:30:50.336014] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.336129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.336144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.336151] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.336157] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.336172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 00:30:13.073 [2024-02-14 20:30:50.346078] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.346197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.346212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.346219] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.346225] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.346240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 00:30:13.073 [2024-02-14 20:30:50.356055] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.356193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.356209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.356215] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.356221] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.356236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 
00:30:13.073 [2024-02-14 20:30:50.366045] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.366164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.366183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.366190] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.366196] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.366212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 00:30:13.073 [2024-02-14 20:30:50.376161] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.376275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.376292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.376299] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.376304] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.376320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 00:30:13.073 [2024-02-14 20:30:50.386190] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.386312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.386328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.386335] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.386341] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.386356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 
00:30:13.073 [2024-02-14 20:30:50.396304] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.396423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.396439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.396446] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.396452] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.396468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 00:30:13.073 [2024-02-14 20:30:50.406209] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.406322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.406338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.406345] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.406351] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.406369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 00:30:13.073 [2024-02-14 20:30:50.416231] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.416352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.416368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.416374] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.416380] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.416395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 
00:30:13.073 [2024-02-14 20:30:50.426222] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.426491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.073 [2024-02-14 20:30:50.426508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.073 [2024-02-14 20:30:50.426515] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.073 [2024-02-14 20:30:50.426521] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.073 [2024-02-14 20:30:50.426536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.073 qpair failed and we were unable to recover it. 00:30:13.073 [2024-02-14 20:30:50.436253] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.073 [2024-02-14 20:30:50.436367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.074 [2024-02-14 20:30:50.436383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.074 [2024-02-14 20:30:50.436390] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.074 [2024-02-14 20:30:50.436396] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.074 [2024-02-14 20:30:50.436411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.074 qpair failed and we were unable to recover it. 00:30:13.074 [2024-02-14 20:30:50.446510] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.074 [2024-02-14 20:30:50.446627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.074 [2024-02-14 20:30:50.446643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.074 [2024-02-14 20:30:50.446656] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.074 [2024-02-14 20:30:50.446662] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.074 [2024-02-14 20:30:50.446678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.074 qpair failed and we were unable to recover it. 
00:30:13.074 [2024-02-14 20:30:50.456370] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.074 [2024-02-14 20:30:50.456483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.074 [2024-02-14 20:30:50.456499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.074 [2024-02-14 20:30:50.456506] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.074 [2024-02-14 20:30:50.456512] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.074 [2024-02-14 20:30:50.456527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.074 qpair failed and we were unable to recover it. 00:30:13.074 [2024-02-14 20:30:50.466430] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.074 [2024-02-14 20:30:50.466587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.074 [2024-02-14 20:30:50.466603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.074 [2024-02-14 20:30:50.466609] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.074 [2024-02-14 20:30:50.466615] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.074 [2024-02-14 20:30:50.466631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.074 qpair failed and we were unable to recover it. 00:30:13.074 [2024-02-14 20:30:50.476374] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.074 [2024-02-14 20:30:50.476495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.074 [2024-02-14 20:30:50.476511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.074 [2024-02-14 20:30:50.476518] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.074 [2024-02-14 20:30:50.476524] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.074 [2024-02-14 20:30:50.476539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.074 qpair failed and we were unable to recover it. 
00:30:13.074 [2024-02-14 20:30:50.486405] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.074 [2024-02-14 20:30:50.486519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.074 [2024-02-14 20:30:50.486535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.074 [2024-02-14 20:30:50.486542] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.074 [2024-02-14 20:30:50.486548] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.074 [2024-02-14 20:30:50.486563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.074 qpair failed and we were unable to recover it. 00:30:13.334 [2024-02-14 20:30:50.496473] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.496602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.496618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.496624] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.496635] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.496656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 00:30:13.334 [2024-02-14 20:30:50.506557] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.506685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.506701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.506708] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.506714] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.506729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 
00:30:13.334 [2024-02-14 20:30:50.516544] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.516666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.516681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.516688] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.516694] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.516710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 00:30:13.334 [2024-02-14 20:30:50.526598] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.526721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.526736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.526743] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.526749] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.526765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 00:30:13.334 [2024-02-14 20:30:50.536594] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.536722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.536738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.536745] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.536750] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.536766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 
00:30:13.334 [2024-02-14 20:30:50.546659] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.546784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.546800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.546807] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.546813] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.546830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 00:30:13.334 [2024-02-14 20:30:50.556603] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.556723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.556738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.556745] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.556751] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.556767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 00:30:13.334 [2024-02-14 20:30:50.566723] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.567002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.567019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.567026] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.567032] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.567048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 
00:30:13.334 [2024-02-14 20:30:50.576749] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.576868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.576883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.576890] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.576896] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.576912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 00:30:13.334 [2024-02-14 20:30:50.586756] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.586899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.586915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.586925] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.586931] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.586946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 00:30:13.334 [2024-02-14 20:30:50.596819] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.596940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.596956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.596963] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.596969] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.596984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 
00:30:13.334 [2024-02-14 20:30:50.606829] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.334 [2024-02-14 20:30:50.606986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.334 [2024-02-14 20:30:50.607003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.334 [2024-02-14 20:30:50.607010] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.334 [2024-02-14 20:30:50.607016] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.334 [2024-02-14 20:30:50.607033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.334 qpair failed and we were unable to recover it. 00:30:13.334 [2024-02-14 20:30:50.616822] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.616937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.616953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.616960] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.616966] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.616982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 00:30:13.335 [2024-02-14 20:30:50.626860] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.626981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.626997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.627003] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.627009] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.627025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 
00:30:13.335 [2024-02-14 20:30:50.636891] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.637011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.637026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.637033] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.637038] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.637054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 00:30:13.335 [2024-02-14 20:30:50.646864] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.646979] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.646994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.647001] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.647007] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.647022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 00:30:13.335 [2024-02-14 20:30:50.656939] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.657053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.657069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.657076] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.657082] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.657097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 
00:30:13.335 [2024-02-14 20:30:50.667004] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.667120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.667135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.667142] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.667148] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.667164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 00:30:13.335 [2024-02-14 20:30:50.676974] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.677087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.677102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.677112] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.677118] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.677134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 00:30:13.335 [2024-02-14 20:30:50.686975] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.687096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.687112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.687118] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.687124] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.687140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 
00:30:13.335 [2024-02-14 20:30:50.697005] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.697119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.697134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.697141] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.697147] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.697162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 00:30:13.335 [2024-02-14 20:30:50.707111] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.707229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.707244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.707251] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.707257] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.707273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 00:30:13.335 [2024-02-14 20:30:50.717134] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.717263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.717279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.717286] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.717292] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.717307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 
00:30:13.335 [2024-02-14 20:30:50.727164] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.727303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.727318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.727325] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.727331] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.727346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 00:30:13.335 [2024-02-14 20:30:50.737178] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.737292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.335 [2024-02-14 20:30:50.737309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.335 [2024-02-14 20:30:50.737316] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.335 [2024-02-14 20:30:50.737322] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.335 [2024-02-14 20:30:50.737338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.335 qpair failed and we were unable to recover it. 00:30:13.335 [2024-02-14 20:30:50.747226] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.335 [2024-02-14 20:30:50.747525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.336 [2024-02-14 20:30:50.747541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.336 [2024-02-14 20:30:50.747549] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.336 [2024-02-14 20:30:50.747555] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.336 [2024-02-14 20:30:50.747572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.336 qpair failed and we were unable to recover it. 
00:30:13.595 [2024-02-14 20:30:50.757186] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.595 [2024-02-14 20:30:50.757308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.595 [2024-02-14 20:30:50.757324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.595 [2024-02-14 20:30:50.757330] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.595 [2024-02-14 20:30:50.757336] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.595 [2024-02-14 20:30:50.757352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.595 qpair failed and we were unable to recover it. 00:30:13.595 [2024-02-14 20:30:50.767307] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.595 [2024-02-14 20:30:50.767437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.595 [2024-02-14 20:30:50.767457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.595 [2024-02-14 20:30:50.767463] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.595 [2024-02-14 20:30:50.767469] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.595 [2024-02-14 20:30:50.767485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.595 qpair failed and we were unable to recover it. 00:30:13.595 [2024-02-14 20:30:50.777301] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.595 [2024-02-14 20:30:50.777420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.595 [2024-02-14 20:30:50.777436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.595 [2024-02-14 20:30:50.777443] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.595 [2024-02-14 20:30:50.777449] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.595 [2024-02-14 20:30:50.777464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.595 qpair failed and we were unable to recover it. 
00:30:13.595 [2024-02-14 20:30:50.787323] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.595 [2024-02-14 20:30:50.787590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.595 [2024-02-14 20:30:50.787607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.595 [2024-02-14 20:30:50.787614] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.787620] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.787635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 00:30:13.596 [2024-02-14 20:30:50.797384] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.797507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.797523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.797529] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.797535] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.797551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 00:30:13.596 [2024-02-14 20:30:50.807390] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.807509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.807525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.807531] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.807537] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.807556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 
00:30:13.596 [2024-02-14 20:30:50.817411] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.817529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.817544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.817551] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.817557] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.817572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 00:30:13.596 [2024-02-14 20:30:50.827450] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.827568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.827584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.827591] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.827597] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.827612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 00:30:13.596 [2024-02-14 20:30:50.837450] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.837571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.837587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.837594] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.837599] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.837615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 
00:30:13.596 [2024-02-14 20:30:50.847497] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.847611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.847627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.847634] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.847639] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.847660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 00:30:13.596 [2024-02-14 20:30:50.857532] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.857680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.857699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.857706] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.857711] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.857727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 00:30:13.596 [2024-02-14 20:30:50.867579] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.867706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.867722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.867729] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.867735] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.867751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 
00:30:13.596 [2024-02-14 20:30:50.877587] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.877712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.877728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.877735] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.877740] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.877756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 00:30:13.596 [2024-02-14 20:30:50.887619] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.887751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.887767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.887774] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.887780] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.887796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 00:30:13.596 [2024-02-14 20:30:50.897721] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.897842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.897857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.897864] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.897870] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.897889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 
00:30:13.596 [2024-02-14 20:30:50.907688] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.907810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.907827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.907833] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.907839] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.907855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.596 qpair failed and we were unable to recover it. 00:30:13.596 [2024-02-14 20:30:50.917698] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.596 [2024-02-14 20:30:50.917818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.596 [2024-02-14 20:30:50.917834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.596 [2024-02-14 20:30:50.917840] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.596 [2024-02-14 20:30:50.917846] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.596 [2024-02-14 20:30:50.917862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 00:30:13.597 [2024-02-14 20:30:50.927725] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.597 [2024-02-14 20:30:50.927840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.597 [2024-02-14 20:30:50.927856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.597 [2024-02-14 20:30:50.927863] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.597 [2024-02-14 20:30:50.927868] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.597 [2024-02-14 20:30:50.927884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 
00:30:13.597 [2024-02-14 20:30:50.937865] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.597 [2024-02-14 20:30:50.937988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.597 [2024-02-14 20:30:50.938004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.597 [2024-02-14 20:30:50.938011] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.597 [2024-02-14 20:30:50.938016] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.597 [2024-02-14 20:30:50.938032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 00:30:13.597 [2024-02-14 20:30:50.947833] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.597 [2024-02-14 20:30:50.947954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.597 [2024-02-14 20:30:50.947973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.597 [2024-02-14 20:30:50.947979] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.597 [2024-02-14 20:30:50.947985] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.597 [2024-02-14 20:30:50.948000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 00:30:13.597 [2024-02-14 20:30:50.957857] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.597 [2024-02-14 20:30:50.957978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.597 [2024-02-14 20:30:50.957994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.597 [2024-02-14 20:30:50.958000] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.597 [2024-02-14 20:30:50.958006] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.597 [2024-02-14 20:30:50.958022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 
00:30:13.597 [2024-02-14 20:30:50.967862] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.597 [2024-02-14 20:30:50.967976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.597 [2024-02-14 20:30:50.967992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.597 [2024-02-14 20:30:50.967999] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.597 [2024-02-14 20:30:50.968004] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.597 [2024-02-14 20:30:50.968021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 00:30:13.597 [2024-02-14 20:30:50.977906] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.597 [2024-02-14 20:30:50.978019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.597 [2024-02-14 20:30:50.978034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.597 [2024-02-14 20:30:50.978041] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.597 [2024-02-14 20:30:50.978046] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.597 [2024-02-14 20:30:50.978062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 00:30:13.597 [2024-02-14 20:30:50.987918] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.597 [2024-02-14 20:30:50.988040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.597 [2024-02-14 20:30:50.988056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.597 [2024-02-14 20:30:50.988063] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.597 [2024-02-14 20:30:50.988072] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.597 [2024-02-14 20:30:50.988088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 
00:30:13.597 [2024-02-14 20:30:50.997956] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.597 [2024-02-14 20:30:50.998074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.597 [2024-02-14 20:30:50.998090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.597 [2024-02-14 20:30:50.998096] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.597 [2024-02-14 20:30:50.998102] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.597 [2024-02-14 20:30:50.998118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 00:30:13.597 [2024-02-14 20:30:51.008023] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.597 [2024-02-14 20:30:51.008148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.597 [2024-02-14 20:30:51.008164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.597 [2024-02-14 20:30:51.008170] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.597 [2024-02-14 20:30:51.008176] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.597 [2024-02-14 20:30:51.008191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.597 qpair failed and we were unable to recover it. 00:30:13.856 [2024-02-14 20:30:51.018053] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.856 [2024-02-14 20:30:51.018175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.856 [2024-02-14 20:30:51.018191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.856 [2024-02-14 20:30:51.018198] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.856 [2024-02-14 20:30:51.018203] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.856 [2024-02-14 20:30:51.018218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.856 qpair failed and we were unable to recover it. 
00:30:13.857 [2024-02-14 20:30:51.028021] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.028138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.028154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.028160] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.028166] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.028181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 00:30:13.857 [2024-02-14 20:30:51.038049] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.038173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.038189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.038195] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.038201] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.038216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 00:30:13.857 [2024-02-14 20:30:51.048086] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.048199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.048215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.048221] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.048227] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.048242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 
00:30:13.857 [2024-02-14 20:30:51.058120] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.058238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.058254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.058261] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.058266] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.058282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 00:30:13.857 [2024-02-14 20:30:51.068146] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.068263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.068279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.068286] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.068291] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.068306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 00:30:13.857 [2024-02-14 20:30:51.078168] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.078443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.078460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.078466] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.078476] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.078491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 
00:30:13.857 [2024-02-14 20:30:51.088248] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.088408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.088424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.088431] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.088436] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.088452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 00:30:13.857 [2024-02-14 20:30:51.098225] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.098355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.098370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.098377] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.098383] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.098398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 00:30:13.857 [2024-02-14 20:30:51.108256] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.108369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.108385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.108391] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.108397] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.108413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 
00:30:13.857 [2024-02-14 20:30:51.118286] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.118402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.118418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.118425] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.118431] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:13.857 [2024-02-14 20:30:51.118446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:13.857 qpair failed and we were unable to recover it. 00:30:13.857 [2024-02-14 20:30:51.128350] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.128527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.128559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.128571] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.128581] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:13.857 [2024-02-14 20:30:51.128607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:13.857 qpair failed and we were unable to recover it. 00:30:13.857 [2024-02-14 20:30:51.138511] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.857 [2024-02-14 20:30:51.138783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.857 [2024-02-14 20:30:51.138803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.857 [2024-02-14 20:30:51.138810] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.857 [2024-02-14 20:30:51.138817] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:13.857 [2024-02-14 20:30:51.138834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:13.857 qpair failed and we were unable to recover it. 
[... 63 near-identical CONNECT-failure blocks elided: the same seven-record sequence (Unknown controller ID 0x1 → Connect command failed, rc -5 → sct 1, sc 130 → CQ transport error -6) repeats roughly every 10 ms from 20:30:51.148 through 20:30:51.770, all against tqpair=0x2181510 on qpair id 3, each ending "qpair failed and we were unable to recover it." ...]
00:30:14.381 [2024-02-14 20:30:51.780155] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.381 [2024-02-14 20:30:51.780272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.381 [2024-02-14 20:30:51.780289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.382 [2024-02-14 20:30:51.780297] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.382 [2024-02-14 20:30:51.780304] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.382 [2024-02-14 20:30:51.780320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.382 qpair failed and we were unable to recover it. 00:30:14.382 [2024-02-14 20:30:51.790210] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.382 [2024-02-14 20:30:51.790332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.382 [2024-02-14 20:30:51.790350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.382 [2024-02-14 20:30:51.790359] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.382 [2024-02-14 20:30:51.790365] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.382 [2024-02-14 20:30:51.790380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.382 qpair failed and we were unable to recover it. 00:30:14.642 [2024-02-14 20:30:51.800356] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.800482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.800503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.800510] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.800516] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.800531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 
00:30:14.642 [2024-02-14 20:30:51.810256] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.810379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.810395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.810402] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.810408] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.810424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 00:30:14.642 [2024-02-14 20:30:51.820217] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.820335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.820352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.820359] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.820364] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.820380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 00:30:14.642 [2024-02-14 20:30:51.830253] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.830374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.830390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.830397] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.830403] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.830418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 
00:30:14.642 [2024-02-14 20:30:51.840365] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.840480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.840497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.840504] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.840509] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.840528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 00:30:14.642 [2024-02-14 20:30:51.850299] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.850418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.850434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.850441] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.850447] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.850462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 00:30:14.642 [2024-02-14 20:30:51.860390] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.860505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.860521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.860528] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.860533] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.860548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 
00:30:14.642 [2024-02-14 20:30:51.870425] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.870544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.870561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.870567] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.870573] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.870588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 00:30:14.642 [2024-02-14 20:30:51.880455] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.880571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.880588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.880595] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.880601] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.880616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 00:30:14.642 [2024-02-14 20:30:51.890506] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.890624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.890643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.890656] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.890662] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.890678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 
00:30:14.642 [2024-02-14 20:30:51.900433] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.900550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.900566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.900573] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.642 [2024-02-14 20:30:51.900579] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.642 [2024-02-14 20:30:51.900595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.642 qpair failed and we were unable to recover it. 00:30:14.642 [2024-02-14 20:30:51.910479] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.642 [2024-02-14 20:30:51.910595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.642 [2024-02-14 20:30:51.910611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.642 [2024-02-14 20:30:51.910618] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:51.910624] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:51.910639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 00:30:14.643 [2024-02-14 20:30:51.920504] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:51.920627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:51.920644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:51.920655] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:51.920661] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:51.920676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 
00:30:14.643 [2024-02-14 20:30:51.930528] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:51.930644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:51.930667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:51.930673] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:51.930682] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:51.930699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 00:30:14.643 [2024-02-14 20:30:51.940597] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:51.940724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:51.940741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:51.940748] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:51.940753] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:51.940769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 00:30:14.643 [2024-02-14 20:30:51.950583] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:51.950727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:51.950744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:51.950750] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:51.950756] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:51.950771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 
00:30:14.643 [2024-02-14 20:30:51.960668] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:51.960789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:51.960805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:51.960812] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:51.960817] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:51.960833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 00:30:14.643 [2024-02-14 20:30:51.970683] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:51.970807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:51.970823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:51.970830] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:51.970835] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:51.970851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 00:30:14.643 [2024-02-14 20:30:51.980681] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:51.980800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:51.980817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:51.980824] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:51.980830] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:51.980845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 
00:30:14.643 [2024-02-14 20:30:51.990811] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:51.990932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:51.990948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:51.990955] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:51.990960] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:51.990976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 00:30:14.643 [2024-02-14 20:30:52.000729] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:52.000851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:52.000867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:52.000874] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:52.000880] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:52.000895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 00:30:14.643 [2024-02-14 20:30:52.010753] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:52.010876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:52.010892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:52.010899] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:52.010904] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:52.010920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 
00:30:14.643 [2024-02-14 20:30:52.020839] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:52.020958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:52.020974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:52.020981] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:52.020991] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:52.021006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 00:30:14.643 [2024-02-14 20:30:52.030870] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:52.030991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:52.031008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:52.031015] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:52.031021] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:52.031036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 00:30:14.643 [2024-02-14 20:30:52.040922] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.643 [2024-02-14 20:30:52.041047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.643 [2024-02-14 20:30:52.041063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.643 [2024-02-14 20:30:52.041070] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.643 [2024-02-14 20:30:52.041075] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.643 [2024-02-14 20:30:52.041091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.643 qpair failed and we were unable to recover it. 
00:30:14.644 [2024-02-14 20:30:52.050957] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.644 [2024-02-14 20:30:52.051088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.644 [2024-02-14 20:30:52.051104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.644 [2024-02-14 20:30:52.051110] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.644 [2024-02-14 20:30:52.051116] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.644 [2024-02-14 20:30:52.051131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.644 qpair failed and we were unable to recover it. 00:30:14.903 [2024-02-14 20:30:52.060973] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.903 [2024-02-14 20:30:52.061127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.903 [2024-02-14 20:30:52.061143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.903 [2024-02-14 20:30:52.061150] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.903 [2024-02-14 20:30:52.061156] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.903 [2024-02-14 20:30:52.061171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.903 qpair failed and we were unable to recover it. 00:30:14.903 [2024-02-14 20:30:52.071010] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.071135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.071152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.071158] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.071164] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.071179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 
00:30:14.904 [2024-02-14 20:30:52.081060] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.081186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.081203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.081209] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.081215] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.081230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 00:30:14.904 [2024-02-14 20:30:52.090989] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.091110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.091126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.091132] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.091138] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.091154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 00:30:14.904 [2024-02-14 20:30:52.101094] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.101213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.101229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.101235] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.101241] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.101257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 
00:30:14.904 [2024-02-14 20:30:52.111124] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.111243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.111259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.111266] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.111275] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.111291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 00:30:14.904 [2024-02-14 20:30:52.121061] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.121182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.121198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.121205] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.121210] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.121226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 00:30:14.904 [2024-02-14 20:30:52.131147] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.131263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.131279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.131286] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.131292] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.131307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 
00:30:14.904 [2024-02-14 20:30:52.141190] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.141306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.141322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.141329] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.141335] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.141350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 00:30:14.904 [2024-02-14 20:30:52.151158] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.151277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.151294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.151301] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.151306] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.151321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 00:30:14.904 [2024-02-14 20:30:52.161188] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.161308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.161325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.161331] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.161337] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.161352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 
00:30:14.904 [2024-02-14 20:30:52.171286] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.171399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.171416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.171423] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.171429] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.171444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 00:30:14.904 [2024-02-14 20:30:52.181234] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.181362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.181379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.181386] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.181391] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.181406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 00:30:14.904 [2024-02-14 20:30:52.191365] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.191490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.904 [2024-02-14 20:30:52.191508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.904 [2024-02-14 20:30:52.191515] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.904 [2024-02-14 20:30:52.191521] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.904 [2024-02-14 20:30:52.191536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.904 qpair failed and we were unable to recover it. 
00:30:14.904 [2024-02-14 20:30:52.201355] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.904 [2024-02-14 20:30:52.201477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.201494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.201501] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.201511] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.201527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 00:30:14.905 [2024-02-14 20:30:52.211395] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.211511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.211527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.211534] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.211540] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.211555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 00:30:14.905 [2024-02-14 20:30:52.221357] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.221486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.221502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.221509] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.221515] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.221530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 
00:30:14.905 [2024-02-14 20:30:52.231472] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.231592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.231608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.231614] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.231620] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.231636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 00:30:14.905 [2024-02-14 20:30:52.241538] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.241696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.241712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.241719] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.241725] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.241740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 00:30:14.905 [2024-02-14 20:30:52.251515] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.251632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.251654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.251661] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.251667] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.251683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 
00:30:14.905 [2024-02-14 20:30:52.261531] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.261656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.261672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.261678] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.261684] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.261700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 00:30:14.905 [2024-02-14 20:30:52.271507] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.271622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.271638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.271645] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.271656] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.271672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 00:30:14.905 [2024-02-14 20:30:52.281581] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.281707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.281724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.281730] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.281736] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.281752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 
00:30:14.905 [2024-02-14 20:30:52.291629] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.291748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.291764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.291774] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.291780] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.291795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 00:30:14.905 [2024-02-14 20:30:52.301677] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.301832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.301849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.301856] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.301861] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.301877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 00:30:14.905 [2024-02-14 20:30:52.311700] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.905 [2024-02-14 20:30:52.311820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.905 [2024-02-14 20:30:52.311836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.905 [2024-02-14 20:30:52.311843] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.905 [2024-02-14 20:30:52.311849] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:14.905 [2024-02-14 20:30:52.311864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.905 qpair failed and we were unable to recover it. 
00:30:15.165 [2024-02-14 20:30:52.321721] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.165 [2024-02-14 20:30:52.321845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.165 [2024-02-14 20:30:52.321861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.165 [2024-02-14 20:30:52.321868] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.165 [2024-02-14 20:30:52.321873] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.165 [2024-02-14 20:30:52.321888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.165 qpair failed and we were unable to recover it.
00:30:15.165 [2024-02-14 20:30:52.331746] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.165 [2024-02-14 20:30:52.331884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.165 [2024-02-14 20:30:52.331900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.165 [2024-02-14 20:30:52.331907] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.165 [2024-02-14 20:30:52.331912] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.165 [2024-02-14 20:30:52.331928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.165 qpair failed and we were unable to recover it.
00:30:15.165 [2024-02-14 20:30:52.341796] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.165 [2024-02-14 20:30:52.341915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.165 [2024-02-14 20:30:52.341931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.165 [2024-02-14 20:30:52.341938] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.165 [2024-02-14 20:30:52.341943] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.165 [2024-02-14 20:30:52.341959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.165 qpair failed and we were unable to recover it.
00:30:15.165 [2024-02-14 20:30:52.351821] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.165 [2024-02-14 20:30:52.351974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.165 [2024-02-14 20:30:52.351990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.165 [2024-02-14 20:30:52.351997] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.165 [2024-02-14 20:30:52.352002] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.165 [2024-02-14 20:30:52.352017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.165 qpair failed and we were unable to recover it.
00:30:15.165 [2024-02-14 20:30:52.361841] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.165 [2024-02-14 20:30:52.361990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.165 [2024-02-14 20:30:52.362006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.165 [2024-02-14 20:30:52.362013] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.165 [2024-02-14 20:30:52.362019] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.165 [2024-02-14 20:30:52.362034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.165 qpair failed and we were unable to recover it.
00:30:15.165 [2024-02-14 20:30:52.371862] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.165 [2024-02-14 20:30:52.371975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.165 [2024-02-14 20:30:52.371991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.165 [2024-02-14 20:30:52.371997] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.165 [2024-02-14 20:30:52.372003] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.165 [2024-02-14 20:30:52.372019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.165 qpair failed and we were unable to recover it.
00:30:15.165 [2024-02-14 20:30:52.381898] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.165 [2024-02-14 20:30:52.382010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.165 [2024-02-14 20:30:52.382026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.165 [2024-02-14 20:30:52.382036] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.165 [2024-02-14 20:30:52.382042] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.382057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.391958] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.392115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.392132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.392139] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.392145] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.392160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.401935] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.402052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.402069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.402075] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.402081] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.402097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.411971] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.412091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.412107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.412114] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.412119] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.412135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.421930] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.422041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.422057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.422064] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.422070] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.422085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.431939] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.432214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.432231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.432238] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.432244] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.432259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.442041] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.442164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.442181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.442187] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.442193] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.442209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.452086] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.452202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.452219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.452225] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.452231] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.452246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.462108] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.462223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.462240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.462246] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.462252] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.462267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.472152] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.472269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.472286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.472296] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.472302] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.472317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.482160] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.482281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.482298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.482304] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.482310] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.482325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.492177] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.492298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.492315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.492322] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.492328] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.492343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.502212] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.502328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.502344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.502351] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.502356] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.502372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.512245] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.166 [2024-02-14 20:30:52.512358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.166 [2024-02-14 20:30:52.512374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.166 [2024-02-14 20:30:52.512380] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.166 [2024-02-14 20:30:52.512386] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.166 [2024-02-14 20:30:52.512401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.166 qpair failed and we were unable to recover it.
00:30:15.166 [2024-02-14 20:30:52.522280] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.167 [2024-02-14 20:30:52.522396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.167 [2024-02-14 20:30:52.522413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.167 [2024-02-14 20:30:52.522420] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.167 [2024-02-14 20:30:52.522425] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.167 [2024-02-14 20:30:52.522441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.167 qpair failed and we were unable to recover it.
00:30:15.167 [2024-02-14 20:30:52.532302] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.167 [2024-02-14 20:30:52.532453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.167 [2024-02-14 20:30:52.532470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.167 [2024-02-14 20:30:52.532476] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.167 [2024-02-14 20:30:52.532482] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.167 [2024-02-14 20:30:52.532497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.167 qpair failed and we were unable to recover it.
00:30:15.167 [2024-02-14 20:30:52.542332] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.167 [2024-02-14 20:30:52.542446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.167 [2024-02-14 20:30:52.542463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.167 [2024-02-14 20:30:52.542470] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.167 [2024-02-14 20:30:52.542475] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.167 [2024-02-14 20:30:52.542490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.167 qpair failed and we were unable to recover it.
00:30:15.167 [2024-02-14 20:30:52.552362] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.167 [2024-02-14 20:30:52.552486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.167 [2024-02-14 20:30:52.552503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.167 [2024-02-14 20:30:52.552510] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.167 [2024-02-14 20:30:52.552515] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.167 [2024-02-14 20:30:52.552530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.167 qpair failed and we were unable to recover it.
00:30:15.167 [2024-02-14 20:30:52.562384] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.167 [2024-02-14 20:30:52.562502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.167 [2024-02-14 20:30:52.562519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.167 [2024-02-14 20:30:52.562529] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.167 [2024-02-14 20:30:52.562534] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.167 [2024-02-14 20:30:52.562549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.167 qpair failed and we were unable to recover it.
00:30:15.167 [2024-02-14 20:30:52.572377] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.167 [2024-02-14 20:30:52.572502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.167 [2024-02-14 20:30:52.572519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.167 [2024-02-14 20:30:52.572526] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.167 [2024-02-14 20:30:52.572531] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.167 [2024-02-14 20:30:52.572547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.167 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.582413] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.582541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.582558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.582564] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.582570] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.582586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.592478] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.592604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.592620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.592627] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.592633] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.592655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.602498] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.602645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.602666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.602673] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.602678] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.602694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.612525] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.612640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.612664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.612671] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.612676] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.612692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.622532] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.622657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.622674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.622681] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.622686] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.622703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.632566] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.632902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.632919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.632926] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.632931] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.632946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.642650] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.642767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.642783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.642790] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.642795] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.642810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.652618] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.652744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.652764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.652770] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.652776] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.652791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.662610] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.662884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.662902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.662908] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.662914] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.662929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.672708] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.672825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.672841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.672848] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.672853] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.672870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.682736] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.682867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.427 [2024-02-14 20:30:52.682883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.427 [2024-02-14 20:30:52.682890] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.427 [2024-02-14 20:30:52.682895] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.427 [2024-02-14 20:30:52.682911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.427 qpair failed and we were unable to recover it.
00:30:15.427 [2024-02-14 20:30:52.692756] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.427 [2024-02-14 20:30:52.692872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.692888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.692895] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.692900] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.692915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.702834] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.702989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.703006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.703012] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.703018] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.703034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.712769] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.712920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.712936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.712943] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.712948] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.712964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.722853] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.722977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.722993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.723001] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.723006] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.723022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.732872] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.733026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.733042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.733049] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.733055] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.733070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.742929] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.743050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.743070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.743077] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.743083] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.743098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.752977] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.753102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.753118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.753125] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.753131] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.753147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.763002] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.763168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.763184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.763190] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.763196] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.763211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.773020] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.773179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.773195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.773202] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.773207] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.773222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.783045] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.783164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.783180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.783187] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.783192] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.783210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.793070] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.793190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.793207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.793214] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.793220] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.793236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.803092] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.803211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.803227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.803234] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.803240] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.803255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.813118] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.813253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.813269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.813276] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.813281] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.813297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.428 [2024-02-14 20:30:52.823137] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.428 [2024-02-14 20:30:52.823251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.428 [2024-02-14 20:30:52.823267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.428 [2024-02-14 20:30:52.823274] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.428 [2024-02-14 20:30:52.823280] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.428 [2024-02-14 20:30:52.823295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.428 qpair failed and we were unable to recover it.
00:30:15.429 [2024-02-14 20:30:52.833188] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.429 [2024-02-14 20:30:52.833309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.429 [2024-02-14 20:30:52.833328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.429 [2024-02-14 20:30:52.833335] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.429 [2024-02-14 20:30:52.833341] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.429 [2024-02-14 20:30:52.833356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.429 qpair failed and we were unable to recover it.
00:30:15.688 [2024-02-14 20:30:52.843212] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.688 [2024-02-14 20:30:52.843338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.688 [2024-02-14 20:30:52.843355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.688 [2024-02-14 20:30:52.843362] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.688 [2024-02-14 20:30:52.843367] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.689 [2024-02-14 20:30:52.843382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.689 qpair failed and we were unable to recover it.
00:30:15.689 [2024-02-14 20:30:52.853242] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.689 [2024-02-14 20:30:52.853368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.689 [2024-02-14 20:30:52.853384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.689 [2024-02-14 20:30:52.853391] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.689 [2024-02-14 20:30:52.853396] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.689 [2024-02-14 20:30:52.853412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.689 qpair failed and we were unable to recover it.
00:30:15.689 [2024-02-14 20:30:52.863272] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.689 [2024-02-14 20:30:52.863394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.689 [2024-02-14 20:30:52.863410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.689 [2024-02-14 20:30:52.863417] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.689 [2024-02-14 20:30:52.863423] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.689 [2024-02-14 20:30:52.863438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.689 qpair failed and we were unable to recover it.
00:30:15.689 [2024-02-14 20:30:52.873305] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.689 [2024-02-14 20:30:52.873425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.689 [2024-02-14 20:30:52.873442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.689 [2024-02-14 20:30:52.873448] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.689 [2024-02-14 20:30:52.873454] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.689 [2024-02-14 20:30:52.873472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.689 qpair failed and we were unable to recover it.
00:30:15.689 [2024-02-14 20:30:52.883350] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.689 [2024-02-14 20:30:52.883480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.689 [2024-02-14 20:30:52.883496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.689 [2024-02-14 20:30:52.883503] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.689 [2024-02-14 20:30:52.883508] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510
00:30:15.689 [2024-02-14 20:30:52.883523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.689 qpair failed and we were unable to recover it.
00:30:15.689 [2024-02-14 20:30:52.893378] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-02-14 20:30:52.893498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-02-14 20:30:52.893514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-02-14 20:30:52.893521] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-02-14 20:30:52.893526] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.689 [2024-02-14 20:30:52.893542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-02-14 20:30:52.903395] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-02-14 20:30:52.903515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-02-14 20:30:52.903531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-02-14 20:30:52.903538] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-02-14 20:30:52.903543] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.689 [2024-02-14 20:30:52.903559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-02-14 20:30:52.913415] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-02-14 20:30:52.913533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-02-14 20:30:52.913549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-02-14 20:30:52.913556] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-02-14 20:30:52.913561] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.689 [2024-02-14 20:30:52.913577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.689 qpair failed and we were unable to recover it. 
00:30:15.689 [2024-02-14 20:30:52.923436] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-02-14 20:30:52.923555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-02-14 20:30:52.923575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-02-14 20:30:52.923582] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-02-14 20:30:52.923587] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.689 [2024-02-14 20:30:52.923603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-02-14 20:30:52.933482] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-02-14 20:30:52.933594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-02-14 20:30:52.933610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-02-14 20:30:52.933617] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-02-14 20:30:52.933623] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.689 [2024-02-14 20:30:52.933638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-02-14 20:30:52.943495] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-02-14 20:30:52.943614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-02-14 20:30:52.943630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-02-14 20:30:52.943637] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-02-14 20:30:52.943642] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.689 [2024-02-14 20:30:52.943663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.689 qpair failed and we were unable to recover it. 
00:30:15.689 [2024-02-14 20:30:52.953468] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-02-14 20:30:52.953587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-02-14 20:30:52.953603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-02-14 20:30:52.953610] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-02-14 20:30:52.953615] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.689 [2024-02-14 20:30:52.953631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-02-14 20:30:52.963563] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-02-14 20:30:52.963685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-02-14 20:30:52.963701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-02-14 20:30:52.963708] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-02-14 20:30:52.963713] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.689 [2024-02-14 20:30:52.963732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.689 qpair failed and we were unable to recover it. 00:30:15.689 [2024-02-14 20:30:52.973606] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.689 [2024-02-14 20:30:52.973727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.689 [2024-02-14 20:30:52.973743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.689 [2024-02-14 20:30:52.973750] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.689 [2024-02-14 20:30:52.973756] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:52.973771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 
00:30:15.690 [2024-02-14 20:30:52.983623] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:52.983742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:52.983758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:52.983765] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:52.983771] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:52.983786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 00:30:15.690 [2024-02-14 20:30:52.993671] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:52.993803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:52.993819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:52.993826] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:52.993833] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:52.993848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 00:30:15.690 [2024-02-14 20:30:53.003695] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.003844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.003860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.003867] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.003873] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.003888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 
00:30:15.690 [2024-02-14 20:30:53.013713] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.013839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.013858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.013865] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.013871] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.013888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 00:30:15.690 [2024-02-14 20:30:53.023726] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.023848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.023864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.023871] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.023877] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.023893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 00:30:15.690 [2024-02-14 20:30:53.033776] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.033902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.033918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.033924] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.033930] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.033946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 
00:30:15.690 [2024-02-14 20:30:53.043823] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.043948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.043964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.043971] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.043977] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.043992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 00:30:15.690 [2024-02-14 20:30:53.053844] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.054005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.054021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.054028] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.054034] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.054053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 00:30:15.690 [2024-02-14 20:30:53.063897] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.064024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.064040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.064047] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.064053] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.064069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 
00:30:15.690 [2024-02-14 20:30:53.073885] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.074038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.074055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.074062] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.074067] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.074083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 00:30:15.690 [2024-02-14 20:30:53.083952] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.084070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.084086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.084093] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.084099] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.084114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 00:30:15.690 [2024-02-14 20:30:53.093977] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.690 [2024-02-14 20:30:53.094092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.690 [2024-02-14 20:30:53.094108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.690 [2024-02-14 20:30:53.094116] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.690 [2024-02-14 20:30:53.094122] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.690 [2024-02-14 20:30:53.094137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.690 qpair failed and we were unable to recover it. 
00:30:15.949 [2024-02-14 20:30:53.104003] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.104135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.104154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.104161] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.104167] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.104183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 00:30:15.949 [2024-02-14 20:30:53.114052] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.114186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.114203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.114210] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.114215] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.114231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 00:30:15.949 [2024-02-14 20:30:53.124064] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.124184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.124200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.124207] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.124213] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.124229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 
00:30:15.949 [2024-02-14 20:30:53.134093] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.134214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.134231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.134237] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.134243] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.134258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 00:30:15.949 [2024-02-14 20:30:53.144114] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.144232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.144248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.144255] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.144263] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.144279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 00:30:15.949 [2024-02-14 20:30:53.154158] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.154279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.154296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.154303] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.154309] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.154324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 
00:30:15.949 [2024-02-14 20:30:53.164170] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.164289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.164305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.164311] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.164317] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.164332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 00:30:15.949 [2024-02-14 20:30:53.174196] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.174313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.174330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.174337] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.174343] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.174358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 00:30:15.949 [2024-02-14 20:30:53.184213] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.184334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.184351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.184358] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.184363] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.184379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 
00:30:15.949 [2024-02-14 20:30:53.194246] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.194370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.194388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.194395] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.194401] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.194417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 00:30:15.949 [2024-02-14 20:30:53.204289] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.204406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.204423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.204430] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.204435] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.204451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 00:30:15.949 [2024-02-14 20:30:53.214333] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.214451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.214468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.214475] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.214481] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.214497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 
00:30:15.949 [2024-02-14 20:30:53.224398] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.224519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.224535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.949 [2024-02-14 20:30:53.224542] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.949 [2024-02-14 20:30:53.224548] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.949 [2024-02-14 20:30:53.224562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.949 qpair failed and we were unable to recover it. 00:30:15.949 [2024-02-14 20:30:53.234420] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.949 [2024-02-14 20:30:53.234539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.949 [2024-02-14 20:30:53.234556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.234563] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.234572] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.234588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 00:30:15.950 [2024-02-14 20:30:53.244405] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.244520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.244536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.244542] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.244548] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.244563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 
00:30:15.950 [2024-02-14 20:30:53.254418] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.254539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.254556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.254562] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.254568] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.254583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 00:30:15.950 [2024-02-14 20:30:53.264463] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.264577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.264594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.264601] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.264606] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.264622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 00:30:15.950 [2024-02-14 20:30:53.274441] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.274592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.274608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.274615] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.274620] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.274636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 
00:30:15.950 [2024-02-14 20:30:53.284531] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.284657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.284674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.284681] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.284686] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.284702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 00:30:15.950 [2024-02-14 20:30:53.294523] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.294639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.294662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.294669] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.294675] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.294690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 00:30:15.950 [2024-02-14 20:30:53.304502] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.304616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.304632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.304639] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.304645] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.304667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 
00:30:15.950 [2024-02-14 20:30:53.314584] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.314708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.314725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.314732] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.314738] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.314754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 00:30:15.950 [2024-02-14 20:30:53.324629] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.324765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.324782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.324789] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.324798] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.324814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 00:30:15.950 [2024-02-14 20:30:53.334684] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.334802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.334818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.334825] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.334831] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.334846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 
00:30:15.950 [2024-02-14 20:30:53.344734] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.344869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.344886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.344893] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.344898] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.344913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 00:30:15.950 [2024-02-14 20:30:53.354735] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.950 [2024-02-14 20:30:53.354850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.950 [2024-02-14 20:30:53.354866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.950 [2024-02-14 20:30:53.354873] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.950 [2024-02-14 20:30:53.354879] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:15.950 [2024-02-14 20:30:53.354894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:15.950 qpair failed and we were unable to recover it. 00:30:16.208 [2024-02-14 20:30:53.364732] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.208 [2024-02-14 20:30:53.364852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.208 [2024-02-14 20:30:53.364868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.208 [2024-02-14 20:30:53.364875] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.208 [2024-02-14 20:30:53.364881] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:16.208 [2024-02-14 20:30:53.364897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.208 qpair failed and we were unable to recover it. 
00:30:16.208 [2024-02-14 20:30:53.374811] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.208 [2024-02-14 20:30:53.374931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.208 [2024-02-14 20:30:53.374948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.208 [2024-02-14 20:30:53.374954] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.208 [2024-02-14 20:30:53.374960] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:16.208 [2024-02-14 20:30:53.374975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.208 qpair failed and we were unable to recover it. 00:30:16.208 [2024-02-14 20:30:53.384789] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.208 [2024-02-14 20:30:53.384929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.208 [2024-02-14 20:30:53.384945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.208 [2024-02-14 20:30:53.384952] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.208 [2024-02-14 20:30:53.384958] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:16.208 [2024-02-14 20:30:53.384973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.209 qpair failed and we were unable to recover it. 00:30:16.209 [2024-02-14 20:30:53.394873] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.209 [2024-02-14 20:30:53.394992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.209 [2024-02-14 20:30:53.395008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.209 [2024-02-14 20:30:53.395014] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.209 [2024-02-14 20:30:53.395020] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:16.209 [2024-02-14 20:30:53.395035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.209 qpair failed and we were unable to recover it. 
00:30:16.209 [2024-02-14 20:30:53.404881] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.209 [2024-02-14 20:30:53.405004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.209 [2024-02-14 20:30:53.405021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.209 [2024-02-14 20:30:53.405029] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.209 [2024-02-14 20:30:53.405034] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:16.209 [2024-02-14 20:30:53.405049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.209 qpair failed and we were unable to recover it. 00:30:16.209 [2024-02-14 20:30:53.414941] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.209 [2024-02-14 20:30:53.415073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.209 [2024-02-14 20:30:53.415090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.209 [2024-02-14 20:30:53.415097] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.209 [2024-02-14 20:30:53.415107] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:16.209 [2024-02-14 20:30:53.415123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.209 qpair failed and we were unable to recover it. 00:30:16.209 [2024-02-14 20:30:53.424941] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.209 [2024-02-14 20:30:53.425058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.209 [2024-02-14 20:30:53.425075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.209 [2024-02-14 20:30:53.425081] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.209 [2024-02-14 20:30:53.425087] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:16.209 [2024-02-14 20:30:53.425103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.209 qpair failed and we were unable to recover it. 
[The same CONNECT failure block repeats, near verbatim, roughly four dozen more times between 20:30:53.434 and 20:30:53.906 (the elapsed-time prefix advancing from 00:30:16.209 to 00:30:16.519), at roughly 10 ms intervals: Unknown controller ID 0x1 in ctrlr.c on the target side, Connect command failed with rc -5 then sct 1, sc 130 on the host side, Failed to poll NVMe-oF Fabric CONNECT command, Failed to connect tqpair=0x2181510, and CQ transport error -6 (No such device or address) on qpair id 3, each block ending "qpair failed and we were unable to recover it." Only the timestamps differ between repetitions.]
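The failure signature above decodes as follows. On the target side, _nvmf_ctrlr_add_io_qpair rejects each I/O queue pair because the controller ID (0x1) carried in the Fabrics CONNECT data no longer matches a live controller, which is expected while the disconnect test is tearing the controller down. On the host side the CONNECT completes with sct 1, sc 130: status code type 1 is Command Specific Status, and 130 (0x82) is the Fabrics CONNECT status "Connect Invalid Parameters"; rc -5 is -EIO from the polling path, and the CQ transport error -6 (-ENXIO) is the driver giving the qpair up. A minimal sketch of the same exchange from a host with nvme-cli (the test's initiator is the SPDK userspace driver, not the kernel, so this only approximates the wire traffic; address, port and NQN are taken from the log above):

  # attempt an NVMe/TCP connect to the subsystem the test exercises
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # list what attached, then detach again
  nvme list-subsys
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1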
00:30:16.519 [2024-02-14 20:30:53.916393] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.519 [2024-02-14 20:30:53.916536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.519 [2024-02-14 20:30:53.916552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.519 [2024-02-14 20:30:53.916559] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.519 [2024-02-14 20:30:53.916565] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2181510 00:30:16.519 [2024-02-14 20:30:53.916581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.519 qpair failed and we were unable to recover it. 00:30:16.519 [2024-02-14 20:30:53.916866] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218efe0 is same with the state(5) to be set 00:30:16.519 [2024-02-14 20:30:53.926465] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.519 [2024-02-14 20:30:53.926644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.519 [2024-02-14 20:30:53.926682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.519 [2024-02-14 20:30:53.926695] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.519 [2024-02-14 20:30:53.926705] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:16.519 [2024-02-14 20:30:53.926731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.519 qpair failed and we were unable to recover it. 00:30:16.778 [2024-02-14 20:30:53.936441] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.778 [2024-02-14 20:30:53.936621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.778 [2024-02-14 20:30:53.936645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.778 [2024-02-14 20:30:53.936657] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.778 [2024-02-14 20:30:53.936663] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc780000b90 00:30:16.778 [2024-02-14 20:30:53.936682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.778 qpair failed and we were unable to recover it. 
00:30:16.778 [2024-02-14 20:30:53.946513] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.778 [2024-02-14 20:30:53.946819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.778 [2024-02-14 20:30:53.946851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.778 [2024-02-14 20:30:53.946863] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.778 [2024-02-14 20:30:53.946874] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc770000b90 00:30:16.778 [2024-02-14 20:30:53.946899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.778 qpair failed and we were unable to recover it. 00:30:16.778 [2024-02-14 20:30:53.956486] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.778 [2024-02-14 20:30:53.956624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.778 [2024-02-14 20:30:53.956642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.778 [2024-02-14 20:30:53.956656] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.778 [2024-02-14 20:30:53.956662] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc770000b90 00:30:16.778 [2024-02-14 20:30:53.956680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.778 qpair failed and we were unable to recover it. 00:30:16.778 [2024-02-14 20:30:53.966543] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.778 [2024-02-14 20:30:53.966708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.778 [2024-02-14 20:30:53.966730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.778 [2024-02-14 20:30:53.966738] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.778 [2024-02-14 20:30:53.966744] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc778000b90 00:30:16.778 [2024-02-14 20:30:53.966767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.778 qpair failed and we were unable to recover it. 
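From this point the tqpair values change from 0x2181510 to addresses in the 0x7fc7... range and the failing qpair ids move across 1, 4 and 2, so the host appears to be retrying on freshly allocated queue pairs after the earlier one was abandoned. One way to watch the resulting TCP churn from the initiator host while the test runs (port taken from the trsvcid in the records above; a hedged observation aid, not part of the test itself):

  # show live TCP connections to the target's NVMe/TCP port
  ss -tn 'dport = :4420'
  # or poll it to see connections appear and disappear
  watch -n 0.5 "ss -tn 'dport = :4420'"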
00:30:16.778 [2024-02-14 20:30:53.976575] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.778 [2024-02-14 20:30:53.976693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.778 [2024-02-14 20:30:53.976709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.778 [2024-02-14 20:30:53.976717] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.778 [2024-02-14 20:30:53.976722] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc778000b90 00:30:16.778 [2024-02-14 20:30:53.976738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.779 qpair failed and we were unable to recover it. 00:30:16.779 [2024-02-14 20:30:53.977009] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218efe0 (9): Bad file descriptor 00:30:16.779 Initializing NVMe Controllers 00:30:16.779 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:16.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:16.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:16.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:16.779 Initialization complete. Launching workers. 
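The attach lines above show the host recovering once the target settles: the controller at 10.0.0.2:4420 attaches and is associated with lcores 0 through 3. A quick target-side cross-check, assuming the default RPC socket and the SPDK path of this workspace (both assumptions, not taken from the test script):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # confirm cnode1 still exists and lists its TCP listener
  "$SPDK/scripts/rpc.py" nvmf_get_subsystems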
00:30:16.779 Starting thread on core 1 00:30:16.779 Starting thread on core 2 00:30:16.779 Starting thread on core 3 00:30:16.779 Starting thread on core 0 00:30:16.779 20:30:53 -- host/target_disconnect.sh@59 -- # sync 00:30:16.779 00:30:16.779 real 0m11.221s 00:30:16.779 user 0m20.555s 00:30:16.779 sys 0m4.345s 00:30:16.779 20:30:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:16.779 20:30:53 -- common/autotest_common.sh@10 -- # set +x 00:30:16.779 ************************************ 00:30:16.779 END TEST nvmf_target_disconnect_tc2 00:30:16.779 ************************************ 00:30:16.779 20:30:54 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:16.779 20:30:54 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:16.779 20:30:54 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:16.779 20:30:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:16.779 20:30:54 -- nvmf/common.sh@116 -- # sync 00:30:16.779 20:30:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:16.779 20:30:54 -- nvmf/common.sh@119 -- # set +e 00:30:16.779 20:30:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:16.779 20:30:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:16.779 rmmod nvme_tcp 00:30:16.779 rmmod nvme_fabrics 00:30:16.779 rmmod nvme_keyring 00:30:16.779 20:30:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:16.779 20:30:54 -- nvmf/common.sh@123 -- # set -e 00:30:16.779 20:30:54 -- nvmf/common.sh@124 -- # return 0 00:30:16.779 20:30:54 -- nvmf/common.sh@477 -- # '[' -n 1967048 ']' 00:30:16.779 20:30:54 -- nvmf/common.sh@478 -- # killprocess 1967048 00:30:16.779 20:30:54 -- common/autotest_common.sh@924 -- # '[' -z 1967048 ']' 00:30:16.779 20:30:54 -- common/autotest_common.sh@928 -- # kill -0 1967048 00:30:16.779 20:30:54 -- common/autotest_common.sh@929 -- # uname 00:30:16.779 20:30:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:30:16.779 20:30:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1967048 00:30:16.779 20:30:54 -- common/autotest_common.sh@930 -- # process_name=reactor_4 00:30:16.779 20:30:54 -- common/autotest_common.sh@934 -- # '[' reactor_4 = sudo ']' 00:30:16.779 20:30:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1967048' 00:30:16.779 killing process with pid 1967048 00:30:16.779 20:30:54 -- common/autotest_common.sh@943 -- # kill 1967048 00:30:16.779 20:30:54 -- common/autotest_common.sh@948 -- # wait 1967048 00:30:17.038 20:30:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:17.038 20:30:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:17.038 20:30:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:17.038 20:30:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:17.038 20:30:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:17.038 20:30:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.038 20:30:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.038 20:30:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.573 20:30:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:19.573 00:30:19.573 real 0m19.831s 00:30:19.573 user 0m47.717s 00:30:19.573 sys 0m9.068s 00:30:19.573 20:30:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:19.573 20:30:56 -- common/autotest_common.sh@10 -- # set +x 00:30:19.573 ************************************ 00:30:19.573 END TEST nvmf_target_disconnect 00:30:19.573 
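The teardown above unloads nvme_tcp, nvme_fabrics and nvme_keyring, then stops the target app through killprocess. A hedged reconstruction of that helper from the xtrace visible here (the exact source in common/autotest_common.sh is not shown, so names and ordering are inferred from the logged commands):

  killprocess() {
      local pid=$1
      # bail out if the process is already gone
      kill -0 "$pid" || return 1
      # never kill sudo itself; the trace compares the comm name against it
      if [ "$(uname)" = Linux ] &&
         [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
          return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }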
************************************ 00:30:19.573 20:30:56 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:19.573 20:30:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:19.573 20:30:56 -- common/autotest_common.sh@10 -- # set +x 00:30:19.573 20:30:56 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:19.573 00:30:19.573 real 23m34.971s 00:30:19.573 user 62m0.351s 00:30:19.573 sys 6m11.780s 00:30:19.573 20:30:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:19.573 20:30:56 -- common/autotest_common.sh@10 -- # set +x 00:30:19.573 ************************************ 00:30:19.573 END TEST nvmf_tcp 00:30:19.573 ************************************ 00:30:19.573 20:30:56 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:30:19.573 20:30:56 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.573 20:30:56 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:30:19.573 20:30:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:19.573 20:30:56 -- common/autotest_common.sh@10 -- # set +x 00:30:19.573 ************************************ 00:30:19.573 START TEST spdkcli_nvmf_tcp 00:30:19.573 ************************************ 00:30:19.573 20:30:56 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.573 * Looking for test storage... 00:30:19.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:19.573 20:30:56 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:19.573 20:30:56 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:19.573 20:30:56 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:19.573 20:30:56 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.573 20:30:56 -- nvmf/common.sh@7 -- # uname -s 00:30:19.573 20:30:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.573 20:30:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.573 20:30:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.573 20:30:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.573 20:30:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.573 20:30:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.573 20:30:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.573 20:30:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.573 20:30:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.573 20:30:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.573 20:30:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:19.573 20:30:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:19.573 20:30:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.573 20:30:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.573 20:30:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.573 20:30:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.573 20:30:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:19.573 20:30:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.573 20:30:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.573 20:30:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.573 20:30:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.573 20:30:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.573 20:30:56 -- paths/export.sh@5 -- # export PATH 00:30:19.573 20:30:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.573 20:30:56 -- nvmf/common.sh@46 -- # : 0 00:30:19.573 20:30:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:19.573 20:30:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:19.573 20:30:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:19.573 20:30:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.573 20:30:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.573 20:30:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:19.573 20:30:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:19.573 20:30:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:19.573 20:30:56 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:19.573 20:30:56 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:19.573 20:30:56 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:19.573 20:30:56 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:19.573 20:30:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:19.573 20:30:56 -- common/autotest_common.sh@10 -- # set +x 00:30:19.573 20:30:56 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:19.573 20:30:56 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1968663 00:30:19.573 20:30:56 -- spdkcli/common.sh@34 -- # waitforlisten 1968663 00:30:19.573 20:30:56 -- common/autotest_common.sh@817 -- # '[' -z 1968663 ']' 00:30:19.573 20:30:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.573 20:30:56 -- spdkcli/common.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:19.573 20:30:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:19.573 20:30:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.573 20:30:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:19.573 20:30:56 -- common/autotest_common.sh@10 -- # set +x 00:30:19.573 [2024-02-14 20:30:56.648914] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:30:19.574 [2024-02-14 20:30:56.648964] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968663 ] 00:30:19.574 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.574 [2024-02-14 20:30:56.708722] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:19.574 [2024-02-14 20:30:56.784216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:19.574 [2024-02-14 20:30:56.784349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.574 [2024-02-14 20:30:56.784351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.141 20:30:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:20.141 20:30:57 -- common/autotest_common.sh@850 -- # return 0 00:30:20.141 20:30:57 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:20.141 20:30:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:20.141 20:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.141 20:30:57 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:20.141 20:30:57 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:20.141 20:30:57 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:20.141 20:30:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:20.141 20:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.141 20:30:57 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:20.141 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:20.141 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:20.141 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:20.141 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:20.141 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:20.141 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:20.141 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.141 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.141 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:20.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:20.141 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:20.141 ' 00:30:20.708 [2024-02-14 20:30:57.832950] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:22.611 [2024-02-14 20:30:59.871728] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.988 [2024-02-14 20:31:01.047653] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:25.890 [2024-02-14 20:31:03.210391] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:27.789 [2024-02-14 20:31:05.068216] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:29.167 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:29.167 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:29.167 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:29.167 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:29.167 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:29.167 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:29.167 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:29.167 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.167 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.167 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:29.167 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:29.167 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:29.425 20:31:06 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:29.425 20:31:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:29.425 20:31:06 -- common/autotest_common.sh@10 -- # set +x 00:30:29.426 20:31:06 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:29.426 20:31:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:29.426 20:31:06 -- common/autotest_common.sh@10 -- # set +x 00:30:29.426 20:31:06 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:29.426 20:31:06 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:29.685 20:31:06 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:29.685 20:31:07 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:29.685 20:31:07 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:29.685 20:31:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:29.685 20:31:07 -- common/autotest_common.sh@10 -- # set +x 00:30:29.685 20:31:07 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:29.685 20:31:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:29.685 20:31:07 -- common/autotest_common.sh@10 -- # set +x 00:30:29.685 20:31:07 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:29.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:29.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:29.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:29.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:29.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:29.685 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:29.685 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:29.685 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:29.685 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:29.685 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:29.685 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:29.685 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:29.685 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:29.685 ' 00:30:34.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:34.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:34.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:34.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:34.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:34.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:34.955 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:34.955 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:34.955 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:34.955 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:30:34.955 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:30:34.955 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:30:34.955 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:30:34.955 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:30:34.955 20:31:12 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:30:34.955 20:31:12 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:34.955 20:31:12 -- common/autotest_common.sh@10 -- # set +x
00:30:34.955 20:31:12 -- spdkcli/nvmf.sh@90 -- # killprocess 1968663
00:30:34.955 20:31:12 -- common/autotest_common.sh@924 -- # '[' -z 1968663 ']'
00:30:34.955 20:31:12 -- common/autotest_common.sh@928 -- # kill -0 1968663
00:30:34.955 20:31:12 -- common/autotest_common.sh@929 -- # uname
00:30:34.955 20:31:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:30:34.955 20:31:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1968663
00:30:34.955 20:31:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:30:34.955 20:31:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:30:34.955 20:31:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1968663'
00:30:34.955 killing process with pid 1968663
00:30:34.955 20:31:12 -- common/autotest_common.sh@943 -- # kill 1968663
00:30:34.955 [2024-02-14 20:31:12.089417] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:30:34.955 20:31:12 -- common/autotest_common.sh@948 -- # wait 1968663
00:30:34.955 20:31:12 -- spdkcli/nvmf.sh@1 -- # cleanup
00:30:34.955 20:31:12 -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:30:34.955 20:31:12 -- spdkcli/common.sh@13 -- # '[' -n 1968663 ']'
00:30:34.955 20:31:12 -- spdkcli/common.sh@14 -- # killprocess 1968663
00:30:34.955 20:31:12 -- common/autotest_common.sh@924 -- # '[' -z 1968663 ']'
00:30:34.955 20:31:12 -- common/autotest_common.sh@928 -- # kill -0 1968663
00:30:34.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (1968663) - No such process
00:30:34.955 20:31:12 -- common/autotest_common.sh@951 -- # echo 'Process with pid 1968663 is not found'
00:30:34.955 Process with pid 1968663 is not found
00:30:34.955 20:31:12 -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:30:34.955 20:31:12 -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:30:34.955 20:31:12 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:30:34.955
00:30:34.955 real 0m15.791s
00:30:34.955 user 0m32.674s
00:30:34.955 sys 0m0.671s
00:30:34.955 20:31:12 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:30:34.955 20:31:12 -- common/autotest_common.sh@10 -- # set +x
00:30:34.955 ************************************
00:30:34.955 END TEST spdkcli_nvmf_tcp
00:30:34.955 ************************************
00:30:34.955 20:31:12 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:30:34.955 20:31:12 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']'
00:30:34.955 20:31:12 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:30:34.955 20:31:12 -- common/autotest_common.sh@10 -- # set +x
00:30:34.955 ************************************
00:30:34.955 START TEST nvmf_identify_passthru
00:30:34.955 ************************************
00:30:35.215 20:31:12 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:30:35.215 * Looking for test storage...
00:30:35.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:35.215 20:31:12 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:35.215 20:31:12 -- nvmf/common.sh@7 -- # uname -s
00:30:35.215 20:31:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:35.215 20:31:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:35.215 20:31:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:35.215 20:31:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:35.215 20:31:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:35.215 20:31:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:35.215 20:31:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:35.215 20:31:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:35.215 20:31:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:35.215 20:31:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:35.215 20:31:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:30:35.215 20:31:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:30:35.215 20:31:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:35.215 20:31:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:35.215 20:31:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:35.215 20:31:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:35.215 20:31:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:35.215 20:31:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:35.215 20:31:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:35.215 20:31:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.215 20:31:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.215 20:31:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.215 20:31:12 -- paths/export.sh@5 -- # export PATH
00:30:35.215 20:31:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.215 20:31:12 -- nvmf/common.sh@46 -- # : 0
00:30:35.215 20:31:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:30:35.215 20:31:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:30:35.215 20:31:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:30:35.215 20:31:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:35.215 20:31:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:35.215 20:31:12 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:30:35.215 20:31:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:30:35.215 20:31:12 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:30:35.215 20:31:12 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:35.215 20:31:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:35.215 20:31:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:35.215 20:31:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:35.215 20:31:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.215 20:31:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.215 20:31:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.215 20:31:12 -- paths/export.sh@5 -- # export PATH
00:30:35.215 20:31:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.215 20:31:12 -- target/identify_passthru.sh@12 -- # nvmftestinit
00:30:35.215 20:31:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:30:35.215 20:31:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:35.215 20:31:12 -- nvmf/common.sh@436 -- # prepare_net_devs
00:30:35.215 20:31:12 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:30:35.215 20:31:12 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:30:35.215 20:31:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:35.215 20:31:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:30:35.215 20:31:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:35.215 20:31:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:30:35.215 20:31:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:30:35.215 20:31:12 -- nvmf/common.sh@284 -- # xtrace_disable
00:30:35.215 20:31:12 -- common/autotest_common.sh@10 -- # set +x
00:30:41.788 20:31:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:30:41.788 20:31:18 -- nvmf/common.sh@290 -- # pci_devs=()
00:30:41.788 20:31:18 -- nvmf/common.sh@290 -- # local -a pci_devs
00:30:41.788 20:31:18 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:30:41.788 20:31:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:30:41.788 20:31:18 -- nvmf/common.sh@292 -- # pci_drivers=()
00:30:41.788 20:31:18 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:30:41.788 20:31:18 -- nvmf/common.sh@294 -- # net_devs=()
00:30:41.788 20:31:18 -- nvmf/common.sh@294 -- # local -ga net_devs
00:30:41.788 20:31:18 -- nvmf/common.sh@295 -- # e810=()
00:30:41.788 20:31:18 -- nvmf/common.sh@295 -- # local -ga e810
00:30:41.788 20:31:18 -- nvmf/common.sh@296 -- # x722=()
00:30:41.788 20:31:18 -- nvmf/common.sh@296 -- # local -ga x722
00:30:41.788 20:31:18 -- nvmf/common.sh@297 -- # mlx=()
00:30:41.788 20:31:18 -- nvmf/common.sh@297 -- # local -ga mlx
00:30:41.788 20:31:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:41.788 20:31:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:30:41.788 20:31:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:30:41.788 20:31:18 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:30:41.788 20:31:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:30:41.788 20:31:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:30:41.788 Found 0000:af:00.0 (0x8086 - 0x159b)
00:30:41.788 20:31:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:30:41.788 20:31:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:30:41.788 Found 0000:af:00.1 (0x8086 - 0x159b)
00:30:41.788 20:31:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:30:41.788 20:31:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:30:41.788 20:31:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:41.788 20:31:18 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:30:41.788 20:31:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:41.788 20:31:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:30:41.788 Found net devices under 0000:af:00.0: cvl_0_0
00:30:41.788 20:31:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:30:41.788 20:31:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:30:41.788 20:31:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:41.788 20:31:18 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:30:41.788 20:31:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:41.788 20:31:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:30:41.788 Found net devices under 0000:af:00.1: cvl_0_1
00:30:41.788 20:31:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:30:41.788 20:31:18 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:30:41.788 20:31:18 -- nvmf/common.sh@402 -- # is_hw=yes
00:30:41.788 20:31:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:30:41.788 20:31:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:41.788 20:31:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:41.788 20:31:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:41.788 20:31:18 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:30:41.788 20:31:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:41.788 20:31:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:41.788 20:31:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:30:41.788 20:31:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:41.788 20:31:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:41.788 20:31:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:30:41.788 20:31:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:30:41.788 20:31:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:30:41.788 20:31:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:41.788 20:31:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:41.788 20:31:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:41.788 20:31:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:30:41.788 20:31:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:41.788 20:31:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:41.788 20:31:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:41.788 20:31:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:30:41.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:41.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms
00:30:41.788
00:30:41.788 --- 10.0.0.2 ping statistics ---
00:30:41.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:41.788 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
00:30:41.788 20:31:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:41.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:41.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms
00:30:41.788
00:30:41.788 --- 10.0.0.1 ping statistics ---
00:30:41.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:41.788 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:30:41.788 20:31:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:41.788 20:31:18 -- nvmf/common.sh@410 -- # return 0
00:30:41.788 20:31:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:30:41.788 20:31:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:41.788 20:31:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:30:41.788 20:31:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:41.788 20:31:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:30:41.788 20:31:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:30:41.788 20:31:18 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:30:41.788 20:31:18 -- common/autotest_common.sh@710 -- # xtrace_disable
00:30:41.788 20:31:18 -- common/autotest_common.sh@10 -- # set +x
00:30:41.788 20:31:18 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:30:41.788 20:31:18 -- common/autotest_common.sh@1507 -- # bdfs=()
00:30:41.788 20:31:18 -- common/autotest_common.sh@1507 -- # local bdfs
00:30:41.788 20:31:18 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs))
00:30:41.788 20:31:18 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs
00:30:41.788 20:31:18 -- common/autotest_common.sh@1496 -- # bdfs=()
00:30:41.788 20:31:18 -- common/autotest_common.sh@1496 -- # local bdfs
00:30:41.788 20:31:18 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:41.788 20:31:18 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:30:41.788 20:31:18 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:30:41.788 20:31:18 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:30:41.788 20:31:18 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0
00:30:41.788 20:31:18 -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0
00:30:41.788 20:31:18 -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0
00:30:41.788 20:31:18 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']'
00:30:41.788 20:31:18 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0
00:30:41.788 20:31:18 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:30:41.788 20:31:18 -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:30:41.788 EAL: No free 2048 kB hugepages reported on node 1
00:30:46.053 20:31:22 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ807001JM1P0FGN
00:30:46.053 20:31:22 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0
00:30:46.053 20:31:22 -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:30:46.053 20:31:22 -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:30:46.053 EAL: No free 2048 kB hugepages reported on node 1
00:30:50.243 20:31:26 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL
00:30:50.243 20:31:26 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:30:50.243 20:31:26 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:50.243 20:31:26 -- common/autotest_common.sh@10 -- # set +x
00:30:50.243 20:31:26 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:30:50.243 20:31:26 -- common/autotest_common.sh@710 -- # xtrace_disable
00:30:50.243 20:31:26 -- common/autotest_common.sh@10 -- # set +x
00:30:50.243 20:31:26 -- target/identify_passthru.sh@31 -- # nvmfpid=1975980
00:30:50.243 20:31:26 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:30:50.243 20:31:26 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:50.243 20:31:26 -- target/identify_passthru.sh@35 -- # waitforlisten 1975980
00:30:50.243 20:31:26 -- common/autotest_common.sh@817 -- # '[' -z 1975980 ']'
00:30:50.243 20:31:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:50.243 20:31:26 -- common/autotest_common.sh@822 -- # local max_retries=100
00:30:50.243 20:31:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:50.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:50.243 20:31:26 -- common/autotest_common.sh@826 -- # xtrace_disable
00:30:50.243 20:31:26 -- common/autotest_common.sh@10 -- # set +x
00:30:50.243 [2024-02-14 20:31:26.939675] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:30:50.243 [2024-02-14 20:31:26.939722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:50.243 EAL: No free 2048 kB hugepages reported on node 1
00:30:50.243 [2024-02-14 20:31:27.001438] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:50.243 [2024-02-14 20:31:27.077356] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:50.243 [2024-02-14 20:31:27.077463] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:50.243 [2024-02-14 20:31:27.077471] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:50.243 [2024-02-14 20:31:27.077476] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:50.243 [2024-02-14 20:31:27.077561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:50.243 [2024-02-14 20:31:27.077664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:30:50.243 [2024-02-14 20:31:27.077718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:30:50.243 [2024-02-14 20:31:27.077719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:50.502 20:31:27 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:30:50.502 20:31:27 -- common/autotest_common.sh@850 -- # return 0
00:30:50.502 20:31:27 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:30:50.502 20:31:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:50.502 20:31:27 -- common/autotest_common.sh@10 -- # set +x
00:30:50.502 INFO: Log level set to 20
00:30:50.502 INFO: Requests:
00:30:50.502 {
00:30:50.502 "jsonrpc": "2.0",
00:30:50.502 "method": "nvmf_set_config",
00:30:50.502 "id": 1,
00:30:50.502 "params": {
00:30:50.502 "admin_cmd_passthru": {
00:30:50.502 "identify_ctrlr": true
00:30:50.502 }
00:30:50.502 }
00:30:50.502 }
00:30:50.502
00:30:50.502 INFO: response:
00:30:50.502 {
00:30:50.502 "jsonrpc": "2.0",
00:30:50.502 "id": 1,
00:30:50.502 "result": true
00:30:50.502 }
00:30:50.502
00:30:50.502 20:31:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:50.502 20:31:27 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:30:50.502 20:31:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:50.502 20:31:27 -- common/autotest_common.sh@10 -- # set +x
00:30:50.502 INFO: Setting log level to 20
00:30:50.502 INFO: Setting log level to 20
00:30:50.502 INFO: Log level set to 20
00:30:50.502 INFO: Log level set to 20
00:30:50.502 INFO: Requests:
00:30:50.502 {
00:30:50.502 "jsonrpc": "2.0",
00:30:50.502 "method": "framework_start_init",
00:30:50.502 "id": 1
00:30:50.502 }
00:30:50.502
00:30:50.502 INFO: Requests:
00:30:50.502 {
00:30:50.502 "jsonrpc": "2.0",
00:30:50.502 "method": "framework_start_init",
00:30:50.502 "id": 1
00:30:50.502 }
00:30:50.502
00:30:50.503 [2024-02-14 20:31:27.843129] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:30:50.503 INFO: response:
00:30:50.503 {
00:30:50.503 "jsonrpc": "2.0",
00:30:50.503 "id": 1,
00:30:50.503 "result": true
00:30:50.503 }
00:30:50.503
00:30:50.503 INFO: response:
00:30:50.503 {
00:30:50.503 "jsonrpc": "2.0",
00:30:50.503 "id": 1,
00:30:50.503 "result": true
00:30:50.503 }
00:30:50.503
00:30:50.503 20:31:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:50.503 20:31:27 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:50.503 20:31:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:50.503 20:31:27 -- common/autotest_common.sh@10 -- # set +x
00:30:50.503 INFO: Setting log level to 40
00:30:50.503 INFO: Setting log level to 40
00:30:50.503 INFO: Setting log level to 40
00:30:50.503 [2024-02-14 20:31:27.856526] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:50.503 20:31:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:50.503 20:31:27 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:30:50.503 20:31:27 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:50.503 20:31:27 -- common/autotest_common.sh@10 -- # set +x
00:30:50.503 20:31:27 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:30:50.503 20:31:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:50.503 20:31:27 -- common/autotest_common.sh@10 -- # set +x
00:30:53.794 Nvme0n1
00:30:53.795 20:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:53.795 20:31:30 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:30:53.795 20:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:53.795 20:31:30 -- common/autotest_common.sh@10 -- # set +x
00:30:53.795 20:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:53.795 20:31:30 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:30:53.795 20:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:53.795 20:31:30 -- common/autotest_common.sh@10 -- # set +x
00:30:53.795 20:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:53.795 20:31:30 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:53.795 20:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:53.795 20:31:30 -- common/autotest_common.sh@10 -- # set +x
00:30:53.795 [2024-02-14 20:31:30.749227] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:53.795 20:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:53.795 20:31:30 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:30:53.795 20:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:53.795 20:31:30 -- common/autotest_common.sh@10 -- # set +x
00:30:53.795 [2024-02-14 20:31:30.757008] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:30:53.795 [
00:30:53.795 {
00:30:53.795 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:30:53.795 "subtype": "Discovery",
00:30:53.795 "listen_addresses": [],
00:30:53.795 "allow_any_host": true,
00:30:53.795 "hosts": []
00:30:53.795 },
00:30:53.795 {
00:30:53.795 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:30:53.795 "subtype": "NVMe",
00:30:53.795 "listen_addresses": [
00:30:53.795 {
00:30:53.795 "transport": "TCP",
00:30:53.795 "trtype": "TCP",
00:30:53.795 "adrfam": "IPv4",
00:30:53.795 "traddr": "10.0.0.2",
00:30:53.795 "trsvcid": "4420"
00:30:53.795 }
00:30:53.795 ],
00:30:53.795 "allow_any_host": true,
00:30:53.795 "hosts": [],
00:30:53.795 "serial_number": "SPDK00000000000001",
00:30:53.795 "model_number": "SPDK bdev Controller",
00:30:53.795 "max_namespaces": 1,
00:30:53.795 "min_cntlid": 1,
00:30:53.795 "max_cntlid": 65519,
00:30:53.795 "namespaces": [
00:30:53.795 {
00:30:53.795 "nsid": 1,
00:30:53.795 "bdev_name": "Nvme0n1",
00:30:53.795 "name": "Nvme0n1",
00:30:53.795 "nguid": "0CBC326E8ED749E6B22DE24D71C4F094",
00:30:53.795 "uuid": "0cbc326e-8ed7-49e6-b22d-e24d71c4f094"
00:30:53.795 }
00:30:53.795 ]
00:30:53.795 }
00:30:53.795 ]
00:30:53.795 20:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:53.795 20:31:30 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:30:53.795 20:31:30 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:30:53.795 20:31:30 -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:30:53.795 EAL: No free 2048 kB hugepages reported on node 1
00:30:53.795 20:31:30 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807001JM1P0FGN
00:30:53.795 20:31:30 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:30:53.795 20:31:30 -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:30:53.795 20:31:30 -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:30:53.795 EAL: No free 2048 kB hugepages reported on node 1
00:30:53.795 20:31:31 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL
00:30:53.795 20:31:31 -- target/identify_passthru.sh@63 -- # '[' BTLJ807001JM1P0FGN '!=' BTLJ807001JM1P0FGN ']'
00:30:53.795 20:31:31 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']'
00:30:53.795 20:31:31 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:53.795 20:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:53.795 20:31:31 -- common/autotest_common.sh@10 -- # set +x
00:30:53.795 20:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:53.795 20:31:31 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:30:53.795 20:31:31 -- target/identify_passthru.sh@77 -- # nvmftestfini
00:30:53.795 20:31:31 -- nvmf/common.sh@476 -- # nvmfcleanup
00:30:53.795 20:31:31 -- nvmf/common.sh@116 -- # sync
00:30:53.795 20:31:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:30:53.795 20:31:31 -- nvmf/common.sh@119 -- # set +e
00:30:53.795 20:31:31 -- nvmf/common.sh@120 -- # for i in {1..20}
00:30:53.795 20:31:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:30:53.795 rmmod nvme_tcp
00:30:53.795 rmmod nvme_fabrics
00:30:53.795 rmmod nvme_keyring
00:30:53.795 20:31:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:30:53.795 20:31:31 -- nvmf/common.sh@123 -- # set -e
00:30:53.795 20:31:31 -- nvmf/common.sh@124 -- # return 0
00:30:53.795 20:31:31 -- nvmf/common.sh@477 -- # '[' -n 1975980 ']'
00:30:53.795 20:31:31 -- nvmf/common.sh@478 -- # killprocess 1975980
00:30:53.795 20:31:31 -- common/autotest_common.sh@924 -- # '[' -z 1975980 ']'
00:30:53.795 20:31:31 -- common/autotest_common.sh@928 -- # kill -0 1975980
00:30:53.795 20:31:31 -- common/autotest_common.sh@929 -- # uname
00:30:53.795 20:31:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:30:53.795 20:31:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1975980
00:30:53.795 20:31:31 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:30:53.795 20:31:31 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:30:53.795 20:31:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1975980'
00:30:53.795 killing process with pid 1975980
00:30:53.795 20:31:31 -- common/autotest_common.sh@943 -- # kill 1975980
00:30:53.795 [2024-02-14 20:31:31.186866] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:30:53.795 20:31:31 -- common/autotest_common.sh@948 -- # wait 1975980
00:30:55.703 20:31:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:30:55.703 20:31:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:30:55.703 20:31:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:30:55.703 20:31:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:55.703 20:31:32 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:30:55.703 20:31:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:55.703 20:31:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:30:55.703 20:31:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:57.612 20:31:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:30:57.612
00:30:57.612 real 0m22.389s
00:30:57.612 user 0m29.827s
00:30:57.612 sys 0m5.302s
00:30:57.612 20:31:34 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:30:57.612 20:31:34 -- common/autotest_common.sh@10 -- # set +x
00:30:57.612 ************************************
00:30:57.612 END TEST nvmf_identify_passthru
00:30:57.612 ************************************
00:30:57.612 20:31:34 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:30:57.612 20:31:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:30:57.612 20:31:34 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:30:57.612 20:31:34 -- common/autotest_common.sh@10 -- # set +x
00:30:57.612 ************************************
00:30:57.612 START TEST nvmf_dif
00:30:57.612 ************************************
00:30:57.612 20:31:34 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:30:57.612 * Looking for test storage...
00:30:57.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:57.612 20:31:34 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:57.612 20:31:34 -- nvmf/common.sh@7 -- # uname -s
00:30:57.612 20:31:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:57.612 20:31:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:57.612 20:31:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:57.612 20:31:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:57.612 20:31:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:57.612 20:31:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:57.612 20:31:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:57.612 20:31:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:57.612 20:31:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:57.612 20:31:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:57.612 20:31:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:30:57.612 20:31:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:30:57.612 20:31:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:57.612 20:31:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:57.612 20:31:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:57.612 20:31:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:57.612 20:31:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:57.612 20:31:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:57.612 20:31:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:57.612 20:31:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:57.612 20:31:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:57.612 20:31:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:57.612 20:31:34 -- paths/export.sh@5 -- # export PATH
00:30:57.612 20:31:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:57.612 20:31:34 -- nvmf/common.sh@46 -- # : 0
00:30:57.612 20:31:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:30:57.612 20:31:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:30:57.612 20:31:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:30:57.612 20:31:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:57.612 20:31:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:57.612 20:31:34 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:30:57.612 20:31:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:30:57.612 20:31:34 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:30:57.612 20:31:34 -- target/dif.sh@15 -- # NULL_META=16
00:30:57.612 20:31:34 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:30:57.612 20:31:34 -- target/dif.sh@15 -- # NULL_SIZE=64
00:30:57.612 20:31:34 -- target/dif.sh@15 -- # NULL_DIF=1
00:30:57.612 20:31:34 -- target/dif.sh@135 -- # nvmftestinit
00:30:57.612 20:31:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:30:57.612 20:31:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:57.612 20:31:34 -- nvmf/common.sh@436 -- # prepare_net_devs
00:30:57.612 20:31:34 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:30:57.612 20:31:34 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:30:57.612 20:31:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:57.612 20:31:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:30:57.612 20:31:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:57.612 20:31:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:30:57.612 20:31:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:30:57.612 20:31:34 -- nvmf/common.sh@284 -- # xtrace_disable
00:30:57.612 20:31:34 -- common/autotest_common.sh@10 -- # set +x
00:31:04.192 20:31:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:31:04.192 20:31:40 -- nvmf/common.sh@290 -- # pci_devs=()
00:31:04.192 20:31:40 -- nvmf/common.sh@290 -- # local -a pci_devs
00:31:04.192 20:31:40 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:31:04.192 20:31:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:31:04.192 20:31:40 -- nvmf/common.sh@292 -- # pci_drivers=()
00:31:04.192 20:31:40 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:31:04.192 20:31:40 -- nvmf/common.sh@294 -- # net_devs=()
00:31:04.192 20:31:40 -- nvmf/common.sh@294 -- # local -ga net_devs
00:31:04.192 20:31:40 -- nvmf/common.sh@295 -- # e810=()
00:31:04.192 20:31:40 -- nvmf/common.sh@295 -- # local -ga e810
00:31:04.192 20:31:40 -- nvmf/common.sh@296 -- # x722=()
00:31:04.192 20:31:40 -- nvmf/common.sh@296 -- # local -ga x722
00:31:04.192 20:31:40 -- nvmf/common.sh@297 -- # mlx=()
00:31:04.192 20:31:40 -- nvmf/common.sh@297 -- # local -ga mlx
00:31:04.192 20:31:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:04.192 20:31:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:31:04.192 20:31:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:31:04.192 20:31:40 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:31:04.192 20:31:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:31:04.192 20:31:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:31:04.192 Found 0000:af:00.0 (0x8086 - 0x159b)
00:31:04.192 20:31:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:31:04.192 20:31:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:31:04.192 Found 0000:af:00.1 (0x8086 - 0x159b)
00:31:04.192 20:31:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:31:04.192 20:31:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:31:04.192 20:31:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:31:04.192 20:31:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:04.192 20:31:40 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:31:04.192 20:31:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:04.192 20:31:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:31:04.192 Found net devices under 0000:af:00.0: cvl_0_0
00:31:04.192 20:31:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:31:04.192 20:31:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:31:04.192 20:31:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:04.192 20:31:40 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:31:04.192 20:31:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:04.192 20:31:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:31:04.192 Found net devices under 0000:af:00.1: cvl_0_1
00:31:04.192 20:31:40 -- nvmf/common.sh@389 -- #
net_devs+=("${pci_net_devs[@]}") 00:31:04.192 20:31:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:04.192 20:31:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:04.192 20:31:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:04.192 20:31:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:04.192 20:31:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:04.192 20:31:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.192 20:31:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.192 20:31:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.192 20:31:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:04.192 20:31:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.192 20:31:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.192 20:31:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:04.192 20:31:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.192 20:31:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.192 20:31:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:04.192 20:31:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:04.192 20:31:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.192 20:31:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.192 20:31:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.192 20:31:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.192 20:31:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:04.192 20:31:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.192 20:31:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.192 20:31:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.192 20:31:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:04.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:31:04.192 00:31:04.192 --- 10.0.0.2 ping statistics --- 00:31:04.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.192 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:31:04.192 20:31:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:31:04.192 00:31:04.192 --- 10.0.0.1 ping statistics --- 00:31:04.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.192 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:31:04.192 20:31:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.192 20:31:40 -- nvmf/common.sh@410 -- # return 0 00:31:04.192 20:31:40 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:04.192 20:31:40 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:06.095 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:31:06.664 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:06.664 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:06.664 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:06.664 20:31:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.664 20:31:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:06.664 20:31:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:06.664 20:31:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.664 20:31:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:06.664 20:31:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:06.664 20:31:43 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:06.664 20:31:43 -- target/dif.sh@137 -- # nvmfappstart 00:31:06.664 20:31:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:06.664 20:31:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:06.664 20:31:43 -- common/autotest_common.sh@10 -- # set +x 00:31:06.664 20:31:44 -- nvmf/common.sh@469 -- # nvmfpid=1982081 00:31:06.664 20:31:44 -- nvmf/common.sh@470 -- # waitforlisten 1982081 00:31:06.664 20:31:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:06.664 20:31:44 -- common/autotest_common.sh@817 -- # '[' -z 1982081 ']' 00:31:06.664 20:31:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.664 20:31:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:06.664 20:31:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
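The sequence above is the loopback topology this job runs on: one E810 port (cvl_0_0) is moved into a private network namespace to serve as the NVMe/TCP target at 10.0.0.2, its sibling port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, an iptables rule admits TCP traffic to port 4420, and a ping in each direction verifies the link. A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 names the harness discovered for the two ports:

    # Keep the target port in its own namespace so NVMe/TCP traffic
    # really crosses the wire between the two physical ports.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF), so every listener it opens is reachable only across this link.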
00:31:06.664 20:31:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:06.664 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:06.664 [2024-02-14 20:31:44.049704] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:31:06.664 [2024-02-14 20:31:44.049744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.664 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.923 [2024-02-14 20:31:44.109008] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.923 [2024-02-14 20:31:44.183759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:06.923 [2024-02-14 20:31:44.183885] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.923 [2024-02-14 20:31:44.183893] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.923 [2024-02-14 20:31:44.183899] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.923 [2024-02-14 20:31:44.183916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.492 20:31:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:07.492 20:31:44 -- common/autotest_common.sh@850 -- # return 0 00:31:07.492 20:31:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:07.492 20:31:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:07.492 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:07.492 20:31:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.492 20:31:44 -- target/dif.sh@139 -- # create_transport 00:31:07.492 20:31:44 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:07.492 20:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.492 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:07.492 [2024-02-14 20:31:44.882846] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.492 20:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.492 20:31:44 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:07.492 20:31:44 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:07.492 20:31:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:07.492 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:07.492 ************************************ 00:31:07.492 START TEST fio_dif_1_default 00:31:07.492 ************************************ 00:31:07.492 20:31:44 -- common/autotest_common.sh@1102 -- # fio_dif_1 00:31:07.492 20:31:44 -- target/dif.sh@86 -- # create_subsystems 0 00:31:07.492 20:31:44 -- target/dif.sh@28 -- # local sub 00:31:07.492 20:31:44 -- target/dif.sh@30 -- # for sub in "$@" 00:31:07.492 20:31:44 -- target/dif.sh@31 -- # create_subsystem 0 00:31:07.492 20:31:44 -- target/dif.sh@18 -- # local sub_id=0 00:31:07.492 20:31:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:07.492 20:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.492 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:07.492 bdev_null0 00:31:07.492 20:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.492 20:31:44 -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:07.751 20:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.751 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:07.751 20:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.751 20:31:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:07.751 20:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.751 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:07.751 20:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.751 20:31:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:07.751 20:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.751 20:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:07.751 [2024-02-14 20:31:44.931100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.751 20:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.751 20:31:44 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:07.751 20:31:44 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:07.751 20:31:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:07.751 20:31:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.751 20:31:44 -- nvmf/common.sh@520 -- # config=() 00:31:07.751 20:31:44 -- nvmf/common.sh@520 -- # local subsystem config 00:31:07.751 20:31:44 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.751 20:31:44 -- target/dif.sh@82 -- # gen_fio_conf 00:31:07.751 20:31:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:07.751 20:31:44 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:07.751 20:31:44 -- target/dif.sh@54 -- # local file 00:31:07.751 20:31:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:07.751 { 00:31:07.751 "params": { 00:31:07.751 "name": "Nvme$subsystem", 00:31:07.751 "trtype": "$TEST_TRANSPORT", 00:31:07.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.751 "adrfam": "ipv4", 00:31:07.751 "trsvcid": "$NVMF_PORT", 00:31:07.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.751 "hdgst": ${hdgst:-false}, 00:31:07.751 "ddgst": ${ddgst:-false} 00:31:07.751 }, 00:31:07.751 "method": "bdev_nvme_attach_controller" 00:31:07.751 } 00:31:07.751 EOF 00:31:07.751 )") 00:31:07.751 20:31:44 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:07.751 20:31:44 -- target/dif.sh@56 -- # cat 00:31:07.751 20:31:44 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:07.751 20:31:44 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.751 20:31:44 -- common/autotest_common.sh@1318 -- # shift 00:31:07.751 20:31:44 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:07.751 20:31:44 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.751 20:31:44 -- nvmf/common.sh@542 -- # cat 00:31:07.751 20:31:44 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:07.751 20:31:44 -- common/autotest_common.sh@1322 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.751 20:31:44 -- target/dif.sh@72 -- # (( file <= files )) 00:31:07.751 20:31:44 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:07.751 20:31:44 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:07.751 20:31:44 -- nvmf/common.sh@544 -- # jq . 00:31:07.751 20:31:44 -- nvmf/common.sh@545 -- # IFS=, 00:31:07.751 20:31:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:07.751 "params": { 00:31:07.751 "name": "Nvme0", 00:31:07.751 "trtype": "tcp", 00:31:07.751 "traddr": "10.0.0.2", 00:31:07.751 "adrfam": "ipv4", 00:31:07.751 "trsvcid": "4420", 00:31:07.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:07.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:07.751 "hdgst": false, 00:31:07.751 "ddgst": false 00:31:07.751 }, 00:31:07.751 "method": "bdev_nvme_attach_controller" 00:31:07.751 }' 00:31:07.751 20:31:44 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:07.751 20:31:44 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:07.751 20:31:44 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.751 20:31:44 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.751 20:31:44 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:07.751 20:31:44 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:07.751 20:31:45 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:07.751 20:31:45 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:07.751 20:31:45 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:07.751 20:31:45 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.017 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:08.017 fio-3.35 00:31:08.017 Starting 1 thread 00:31:08.017 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.319 [2024-02-14 20:31:45.611044] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:08.319 [2024-02-14 20:31:45.611094] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:20.523 00:31:20.523 filename0: (groupid=0, jobs=1): err= 0: pid=1982458: Wed Feb 14 20:31:55 2024 00:31:20.523 read: IOPS=95, BW=380KiB/s (389kB/s)(3808KiB/10013msec) 00:31:20.523 slat (nsec): min=5634, max=63601, avg=6399.61, stdev=2326.20 00:31:20.523 clat (usec): min=41792, max=44877, avg=42052.62, stdev=298.13 00:31:20.523 lat (usec): min=41799, max=44912, avg=42059.02, stdev=298.71 00:31:20.523 clat percentiles (usec): 00:31:20.523 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:31:20.523 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:20.523 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:31:20.523 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:31:20.523 | 99.99th=[44827] 00:31:20.523 bw ( KiB/s): min= 352, max= 384, per=99.66%, avg=379.20, stdev=11.72, samples=20 00:31:20.523 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:31:20.523 lat (msec) : 50=100.00% 00:31:20.523 cpu : usr=95.52%, sys=4.20%, ctx=17, majf=0, minf=206 00:31:20.523 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.523 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.523 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:20.523 00:31:20.523 Run status group 0 (all jobs): 00:31:20.523 READ: bw=380KiB/s (389kB/s), 380KiB/s-380KiB/s (389kB/s-389kB/s), io=3808KiB (3899kB), run=10013-10013msec 00:31:20.523 20:31:56 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:20.523 20:31:56 -- target/dif.sh@43 -- # local sub 00:31:20.523 20:31:56 -- target/dif.sh@45 -- # for sub in "$@" 00:31:20.523 20:31:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:20.523 20:31:56 -- target/dif.sh@36 -- # local sub_id=0 00:31:20.523 20:31:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 20:31:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 00:31:20.523 real 0m11.128s 00:31:20.523 user 0m15.680s 00:31:20.523 sys 0m0.708s 00:31:20.523 20:31:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 ************************************ 00:31:20.523 END TEST fio_dif_1_default 00:31:20.523 ************************************ 00:31:20.523 20:31:56 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:20.523 20:31:56 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:20.523 20:31:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 ************************************ 00:31:20.523 START TEST fio_dif_1_multi_subsystems 00:31:20.523 
************************************ 00:31:20.523 20:31:56 -- common/autotest_common.sh@1102 -- # fio_dif_1_multi_subsystems 00:31:20.523 20:31:56 -- target/dif.sh@92 -- # local files=1 00:31:20.523 20:31:56 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:20.523 20:31:56 -- target/dif.sh@28 -- # local sub 00:31:20.523 20:31:56 -- target/dif.sh@30 -- # for sub in "$@" 00:31:20.523 20:31:56 -- target/dif.sh@31 -- # create_subsystem 0 00:31:20.523 20:31:56 -- target/dif.sh@18 -- # local sub_id=0 00:31:20.523 20:31:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 bdev_null0 00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 20:31:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 20:31:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 20:31:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 [2024-02-14 20:31:56.094852] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 20:31:56 -- target/dif.sh@30 -- # for sub in "$@" 00:31:20.523 20:31:56 -- target/dif.sh@31 -- # create_subsystem 1 00:31:20.523 20:31:56 -- target/dif.sh@18 -- # local sub_id=1 00:31:20.523 20:31:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 bdev_null1 00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 20:31:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 20:31:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 20:31:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.523 20:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.523 20:31:56 -- common/autotest_common.sh@10 -- # set +x 
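With the transport created with --dif-insert-or-strip, each fio_dif_* test repeats the same target-side recipe seen in the rpc_cmd traces above: a null bdev with 16 bytes of per-block metadata and the requested DIF type, a subsystem, a namespace, and a listener on 10.0.0.2:4420. A minimal sketch of that sequence driven through scripts/rpc.py directly, assuming the default /var/tmp/spdk.sock RPC socket rather than the harness's rpc_cmd wrapper:

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

fio_dif_1_multi_subsystems simply runs the bdev/subsystem/namespace/listener part twice (bdev_null1 behind cnode1 on the same listener address), which is what the second round of rpc_cmd calls here is doing.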
00:31:20.523 20:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.523 20:31:56 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:20.523 20:31:56 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:20.523 20:31:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:20.523 20:31:56 -- nvmf/common.sh@520 -- # config=() 00:31:20.523 20:31:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.523 20:31:56 -- nvmf/common.sh@520 -- # local subsystem config 00:31:20.523 20:31:56 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.523 20:31:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:20.523 20:31:56 -- target/dif.sh@82 -- # gen_fio_conf 00:31:20.523 20:31:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:20.523 { 00:31:20.523 "params": { 00:31:20.523 "name": "Nvme$subsystem", 00:31:20.523 "trtype": "$TEST_TRANSPORT", 00:31:20.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.523 "adrfam": "ipv4", 00:31:20.523 "trsvcid": "$NVMF_PORT", 00:31:20.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.523 "hdgst": ${hdgst:-false}, 00:31:20.523 "ddgst": ${ddgst:-false} 00:31:20.523 }, 00:31:20.523 "method": "bdev_nvme_attach_controller" 00:31:20.523 } 00:31:20.523 EOF 00:31:20.523 )") 00:31:20.523 20:31:56 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:20.523 20:31:56 -- target/dif.sh@54 -- # local file 00:31:20.523 20:31:56 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:20.523 20:31:56 -- target/dif.sh@56 -- # cat 00:31:20.523 20:31:56 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:20.524 20:31:56 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.524 20:31:56 -- common/autotest_common.sh@1318 -- # shift 00:31:20.524 20:31:56 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:20.524 20:31:56 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.524 20:31:56 -- nvmf/common.sh@542 -- # cat 00:31:20.524 20:31:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:20.524 20:31:56 -- target/dif.sh@72 -- # (( file <= files )) 00:31:20.524 20:31:56 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.524 20:31:56 -- target/dif.sh@73 -- # cat 00:31:20.524 20:31:56 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:20.524 20:31:56 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:20.524 20:31:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:20.524 20:31:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:20.524 { 00:31:20.524 "params": { 00:31:20.524 "name": "Nvme$subsystem", 00:31:20.524 "trtype": "$TEST_TRANSPORT", 00:31:20.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.524 "adrfam": "ipv4", 00:31:20.524 "trsvcid": "$NVMF_PORT", 00:31:20.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.524 "hdgst": ${hdgst:-false}, 00:31:20.524 "ddgst": ${ddgst:-false} 00:31:20.524 }, 00:31:20.524 "method": "bdev_nvme_attach_controller" 00:31:20.524 } 00:31:20.524 EOF 00:31:20.524 )") 00:31:20.524 20:31:56 -- target/dif.sh@72 -- # (( file++ )) 00:31:20.524 
20:31:56 -- target/dif.sh@72 -- # (( file <= files )) 00:31:20.524 20:31:56 -- nvmf/common.sh@542 -- # cat 00:31:20.524 20:31:56 -- nvmf/common.sh@544 -- # jq . 00:31:20.524 20:31:56 -- nvmf/common.sh@545 -- # IFS=, 00:31:20.524 20:31:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:20.524 "params": { 00:31:20.524 "name": "Nvme0", 00:31:20.524 "trtype": "tcp", 00:31:20.524 "traddr": "10.0.0.2", 00:31:20.524 "adrfam": "ipv4", 00:31:20.524 "trsvcid": "4420", 00:31:20.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:20.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:20.524 "hdgst": false, 00:31:20.524 "ddgst": false 00:31:20.524 }, 00:31:20.524 "method": "bdev_nvme_attach_controller" 00:31:20.524 },{ 00:31:20.524 "params": { 00:31:20.524 "name": "Nvme1", 00:31:20.524 "trtype": "tcp", 00:31:20.524 "traddr": "10.0.0.2", 00:31:20.524 "adrfam": "ipv4", 00:31:20.524 "trsvcid": "4420", 00:31:20.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.524 "hdgst": false, 00:31:20.524 "ddgst": false 00:31:20.524 }, 00:31:20.524 "method": "bdev_nvme_attach_controller" 00:31:20.524 }' 00:31:20.524 20:31:56 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:20.524 20:31:56 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:20.524 20:31:56 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.524 20:31:56 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.524 20:31:56 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:20.524 20:31:56 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:20.524 20:31:56 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:20.524 20:31:56 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:20.524 20:31:56 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:20.524 20:31:56 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.524 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:20.524 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:20.524 fio-3.35 00:31:20.524 Starting 2 threads 00:31:20.524 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.524 [2024-02-14 20:31:57.164308] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:20.524 [2024-02-14 20:31:57.164362] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:30.503 00:31:30.503 filename0: (groupid=0, jobs=1): err= 0: pid=1984565: Wed Feb 14 20:32:07 2024 00:31:30.503 read: IOPS=181, BW=726KiB/s (743kB/s)(7264KiB/10005msec) 00:31:30.503 slat (nsec): min=5746, max=24761, avg=6971.97, stdev=2114.28 00:31:30.503 clat (usec): min=1163, max=45153, avg=22015.84, stdev=20439.56 00:31:30.503 lat (usec): min=1169, max=45177, avg=22022.81, stdev=20438.94 00:31:30.503 clat percentiles (usec): 00:31:30.503 | 1.00th=[ 1401], 5.00th=[ 1418], 10.00th=[ 1418], 20.00th=[ 1418], 00:31:30.503 | 30.00th=[ 1434], 40.00th=[ 1532], 50.00th=[41157], 60.00th=[42206], 00:31:30.503 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:31:30.503 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:31:30.503 | 99.99th=[45351] 00:31:30.503 bw ( KiB/s): min= 672, max= 768, per=65.54%, avg=724.80, stdev=33.28, samples=20 00:31:30.503 iops : min= 168, max= 192, avg=181.20, stdev= 8.32, samples=20 00:31:30.503 lat (msec) : 2=47.47%, 4=2.31%, 50=50.22% 00:31:30.503 cpu : usr=97.58%, sys=2.17%, ctx=10, majf=0, minf=184 00:31:30.503 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.503 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.503 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:30.503 filename1: (groupid=0, jobs=1): err= 0: pid=1984566: Wed Feb 14 20:32:07 2024 00:31:30.503 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10023msec) 00:31:30.503 slat (nsec): min=5761, max=27450, avg=7634.53, stdev=2695.57 00:31:30.503 clat (usec): min=41782, max=46296, avg=42087.11, stdev=419.71 00:31:30.503 lat (usec): min=41788, max=46324, avg=42094.74, stdev=420.23 00:31:30.503 clat percentiles (usec): 00:31:30.503 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:30.503 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:30.503 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:31:30.503 | 99.00th=[43779], 99.50th=[44303], 99.90th=[46400], 99.95th=[46400], 00:31:30.503 | 99.99th=[46400] 00:31:30.503 bw ( KiB/s): min= 352, max= 384, per=34.31%, avg=379.20, stdev=11.72, samples=20 00:31:30.503 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:31:30.503 lat (msec) : 50=100.00% 00:31:30.503 cpu : usr=97.73%, sys=2.03%, ctx=13, majf=0, minf=50 00:31:30.503 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.503 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.503 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:30.503 00:31:30.503 Run status group 0 (all jobs): 00:31:30.503 READ: bw=1105KiB/s (1131kB/s), 380KiB/s-726KiB/s (389kB/s-743kB/s), io=10.8MiB (11.3MB), run=10005-10023msec 00:31:30.503 20:32:07 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:30.503 20:32:07 -- target/dif.sh@43 -- # local sub 00:31:30.503 20:32:07 -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.503 20:32:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.503 20:32:07 -- 
target/dif.sh@36 -- # local sub_id=0 00:31:30.503 20:32:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.503 20:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.503 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.503 20:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.503 20:32:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.503 20:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.503 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.503 20:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.503 20:32:07 -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.503 20:32:07 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:30.503 20:32:07 -- target/dif.sh@36 -- # local sub_id=1 00:31:30.503 20:32:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.503 20:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.503 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.503 20:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.503 20:32:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:30.503 20:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.503 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.503 20:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.503 00:31:30.503 real 0m11.453s 00:31:30.503 user 0m26.213s 00:31:30.503 sys 0m0.723s 00:31:30.503 20:32:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:30.503 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.503 ************************************ 00:31:30.503 END TEST fio_dif_1_multi_subsystems 00:31:30.503 ************************************ 00:31:30.503 20:32:07 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:30.503 20:32:07 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:30.503 20:32:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:30.503 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.503 ************************************ 00:31:30.503 START TEST fio_dif_rand_params 00:31:30.503 ************************************ 00:31:30.503 20:32:07 -- common/autotest_common.sh@1102 -- # fio_dif_rand_params 00:31:30.503 20:32:07 -- target/dif.sh@100 -- # local NULL_DIF 00:31:30.503 20:32:07 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:30.503 20:32:07 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:30.503 20:32:07 -- target/dif.sh@103 -- # bs=128k 00:31:30.503 20:32:07 -- target/dif.sh@103 -- # numjobs=3 00:31:30.503 20:32:07 -- target/dif.sh@103 -- # iodepth=3 00:31:30.504 20:32:07 -- target/dif.sh@103 -- # runtime=5 00:31:30.504 20:32:07 -- target/dif.sh@105 -- # create_subsystems 0 00:31:30.504 20:32:07 -- target/dif.sh@28 -- # local sub 00:31:30.504 20:32:07 -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.504 20:32:07 -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.504 20:32:07 -- target/dif.sh@18 -- # local sub_id=0 00:31:30.504 20:32:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:30.504 20:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.504 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.504 bdev_null0 00:31:30.504 20:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.504 20:32:07 -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.504 20:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.504 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.504 20:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.504 20:32:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.504 20:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.504 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.504 20:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.504 20:32:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.504 20:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.504 20:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.504 [2024-02-14 20:32:07.588066] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.504 20:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.504 20:32:07 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:30.504 20:32:07 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:30.504 20:32:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:30.504 20:32:07 -- nvmf/common.sh@520 -- # config=() 00:31:30.504 20:32:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.504 20:32:07 -- nvmf/common.sh@520 -- # local subsystem config 00:31:30.504 20:32:07 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.504 20:32:07 -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.504 20:32:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:30.504 20:32:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:30.504 { 00:31:30.504 "params": { 00:31:30.504 "name": "Nvme$subsystem", 00:31:30.504 "trtype": "$TEST_TRANSPORT", 00:31:30.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.504 "adrfam": "ipv4", 00:31:30.504 "trsvcid": "$NVMF_PORT", 00:31:30.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.504 "hdgst": ${hdgst:-false}, 00:31:30.504 "ddgst": ${ddgst:-false} 00:31:30.504 }, 00:31:30.504 "method": "bdev_nvme_attach_controller" 00:31:30.504 } 00:31:30.504 EOF 00:31:30.504 )") 00:31:30.504 20:32:07 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:30.504 20:32:07 -- target/dif.sh@54 -- # local file 00:31:30.504 20:32:07 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.504 20:32:07 -- target/dif.sh@56 -- # cat 00:31:30.504 20:32:07 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:30.504 20:32:07 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.504 20:32:07 -- common/autotest_common.sh@1318 -- # shift 00:31:30.504 20:32:07 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:30.504 20:32:07 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.504 20:32:07 -- nvmf/common.sh@542 -- # cat 00:31:30.504 20:32:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.504 20:32:07 -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.504 20:32:07 -- common/autotest_common.sh@1322 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.504 20:32:07 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:30.504 20:32:07 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:30.504 20:32:07 -- nvmf/common.sh@544 -- # jq . 00:31:30.504 20:32:07 -- nvmf/common.sh@545 -- # IFS=, 00:31:30.504 20:32:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:30.504 "params": { 00:31:30.504 "name": "Nvme0", 00:31:30.504 "trtype": "tcp", 00:31:30.504 "traddr": "10.0.0.2", 00:31:30.504 "adrfam": "ipv4", 00:31:30.504 "trsvcid": "4420", 00:31:30.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.504 "hdgst": false, 00:31:30.504 "ddgst": false 00:31:30.504 }, 00:31:30.504 "method": "bdev_nvme_attach_controller" 00:31:30.504 }' 00:31:30.504 20:32:07 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:30.504 20:32:07 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:30.504 20:32:07 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.504 20:32:07 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:30.504 20:32:07 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.504 20:32:07 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:30.504 20:32:07 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:30.504 20:32:07 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:30.504 20:32:07 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.504 20:32:07 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.761 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:30.761 ... 00:31:30.761 fio-3.35 00:31:30.761 Starting 3 threads 00:31:30.761 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.018 [2024-02-14 20:32:08.280296] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:31.018 [2024-02-14 20:32:08.280352] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:36.282 00:31:36.282 filename0: (groupid=0, jobs=1): err= 0: pid=1986401: Wed Feb 14 20:32:13 2024 00:31:36.282 read: IOPS=331, BW=41.5MiB/s (43.5MB/s)(208MiB/5002msec) 00:31:36.282 slat (nsec): min=5933, max=61457, avg=9159.85, stdev=3133.35 00:31:36.282 clat (usec): min=4144, max=91029, avg=9026.78, stdev=10216.61 00:31:36.282 lat (usec): min=4150, max=91037, avg=9035.94, stdev=10216.95 00:31:36.282 clat percentiles (usec): 00:31:36.282 | 1.00th=[ 4490], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5211], 00:31:36.282 | 30.00th=[ 5669], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 6980], 00:31:36.282 | 70.00th=[ 7504], 80.00th=[ 8291], 90.00th=[ 9896], 95.00th=[46400], 00:31:36.282 | 99.00th=[52691], 99.50th=[54789], 99.90th=[90702], 99.95th=[90702], 00:31:36.282 | 99.99th=[90702] 00:31:36.282 bw ( KiB/s): min=29184, max=59136, per=45.51%, avg=42444.80, stdev=10927.87, samples=10 00:31:36.282 iops : min= 228, max= 462, avg=331.60, stdev=85.37, samples=10 00:31:36.282 lat (msec) : 10=90.24%, 20=4.70%, 50=2.59%, 100=2.47% 00:31:36.282 cpu : usr=96.26%, sys=3.24%, ctx=12, majf=0, minf=132 00:31:36.282 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.282 issued rwts: total=1660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.282 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:36.282 filename0: (groupid=0, jobs=1): err= 0: pid=1986402: Wed Feb 14 20:32:13 2024 00:31:36.282 read: IOPS=220, BW=27.6MiB/s (29.0MB/s)(139MiB/5038msec) 00:31:36.282 slat (nsec): min=5959, max=29263, avg=9327.42, stdev=3048.83 00:31:36.282 clat (usec): min=3867, max=96061, avg=13564.35, stdev=14820.64 00:31:36.282 lat (usec): min=3873, max=96074, avg=13573.67, stdev=14820.97 00:31:36.282 clat percentiles (usec): 00:31:36.282 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 5080], 20.00th=[ 5866], 00:31:36.282 | 30.00th=[ 6521], 40.00th=[ 7439], 50.00th=[ 8586], 60.00th=[ 9634], 00:31:36.282 | 70.00th=[10421], 80.00th=[11863], 90.00th=[49021], 95.00th=[52167], 00:31:36.282 | 99.00th=[55837], 99.50th=[58983], 99.90th=[93848], 99.95th=[95945], 00:31:36.282 | 99.99th=[95945] 00:31:36.282 bw ( KiB/s): min=16896, max=42752, per=30.47%, avg=28416.00, stdev=7096.53, samples=10 00:31:36.282 iops : min= 132, max= 334, avg=222.00, stdev=55.44, samples=10 00:31:36.282 lat (msec) : 4=0.09%, 10=65.23%, 20=22.46%, 50=2.70%, 100=9.52% 00:31:36.282 cpu : usr=96.37%, sys=3.14%, ctx=40, majf=0, minf=83 00:31:36.282 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.282 issued rwts: total=1113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.282 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:36.282 filename0: (groupid=0, jobs=1): err= 0: pid=1986403: Wed Feb 14 20:32:13 2024 00:31:36.282 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(112MiB/5002msec) 00:31:36.282 slat (nsec): min=5968, max=28677, avg=9667.37, stdev=3174.62 00:31:36.282 clat (usec): min=4060, max=95918, avg=16695.87, stdev=18235.72 00:31:36.282 lat (usec): min=4070, max=95929, avg=16705.53, stdev=18236.00 00:31:36.282 clat 
percentiles (usec): 00:31:36.282 | 1.00th=[ 4621], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6587], 00:31:36.282 | 30.00th=[ 7570], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10290], 00:31:36.282 | 70.00th=[11469], 80.00th=[13698], 90.00th=[52167], 95.00th=[53216], 00:31:36.282 | 99.00th=[92799], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:31:36.282 | 99.99th=[95945] 00:31:36.282 bw ( KiB/s): min=13056, max=40704, per=24.57%, avg=22912.00, stdev=7485.29, samples=10 00:31:36.282 iops : min= 102, max= 318, avg=179.00, stdev=58.48, samples=10 00:31:36.282 lat (msec) : 10=56.57%, 20=26.17%, 50=2.45%, 100=14.81% 00:31:36.282 cpu : usr=96.86%, sys=2.76%, ctx=7, majf=0, minf=152 00:31:36.282 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.282 issued rwts: total=898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.282 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:36.282 00:31:36.282 Run status group 0 (all jobs): 00:31:36.282 READ: bw=91.1MiB/s (95.5MB/s), 22.4MiB/s-41.5MiB/s (23.5MB/s-43.5MB/s), io=459MiB (481MB), run=5002-5038msec 00:31:36.282 20:32:13 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:36.282 20:32:13 -- target/dif.sh@43 -- # local sub 00:31:36.282 20:32:13 -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.282 20:32:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:36.282 20:32:13 -- target/dif.sh@36 -- # local sub_id=0 00:31:36.282 20:32:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:36.282 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.282 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.282 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.282 20:32:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:36.282 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.282 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.282 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.282 20:32:13 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:36.282 20:32:13 -- target/dif.sh@109 -- # bs=4k 00:31:36.282 20:32:13 -- target/dif.sh@109 -- # numjobs=8 00:31:36.282 20:32:13 -- target/dif.sh@109 -- # iodepth=16 00:31:36.282 20:32:13 -- target/dif.sh@109 -- # runtime= 00:31:36.282 20:32:13 -- target/dif.sh@109 -- # files=2 00:31:36.282 20:32:13 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:36.282 20:32:13 -- target/dif.sh@28 -- # local sub 00:31:36.282 20:32:13 -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.282 20:32:13 -- target/dif.sh@31 -- # create_subsystem 0 00:31:36.282 20:32:13 -- target/dif.sh@18 -- # local sub_id=0 00:31:36.282 20:32:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:36.282 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.282 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.282 bdev_null0 00:31:36.282 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.282 20:32:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:36.282 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.282 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.282 20:32:13 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.282 20:32:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:36.282 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.282 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.541 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.541 20:32:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:36.541 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.541 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.541 [2024-02-14 20:32:13.703528] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.541 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.542 20:32:13 -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.542 20:32:13 -- target/dif.sh@31 -- # create_subsystem 1 00:31:36.542 20:32:13 -- target/dif.sh@18 -- # local sub_id=1 00:31:36.542 20:32:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:36.542 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.542 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.542 bdev_null1 00:31:36.542 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.542 20:32:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:36.542 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.542 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.542 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.542 20:32:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:36.542 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.542 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.542 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.542 20:32:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.542 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.542 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.542 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.542 20:32:13 -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.542 20:32:13 -- target/dif.sh@31 -- # create_subsystem 2 00:31:36.542 20:32:13 -- target/dif.sh@18 -- # local sub_id=2 00:31:36.542 20:32:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:36.542 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.542 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.542 bdev_null2 00:31:36.542 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.542 20:32:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:36.542 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.542 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.542 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.542 20:32:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:36.542 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:31:36.542 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.542 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.542 20:32:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:36.542 20:32:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.542 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:31:36.542 20:32:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.542 20:32:13 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:36.542 20:32:13 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:36.542 20:32:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:36.542 20:32:13 -- nvmf/common.sh@520 -- # config=() 00:31:36.542 20:32:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.542 20:32:13 -- nvmf/common.sh@520 -- # local subsystem config 00:31:36.542 20:32:13 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.542 20:32:13 -- target/dif.sh@82 -- # gen_fio_conf 00:31:36.542 20:32:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:36.542 20:32:13 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:36.542 20:32:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:36.542 { 00:31:36.542 "params": { 00:31:36.542 "name": "Nvme$subsystem", 00:31:36.542 "trtype": "$TEST_TRANSPORT", 00:31:36.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.542 "adrfam": "ipv4", 00:31:36.542 "trsvcid": "$NVMF_PORT", 00:31:36.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.542 "hdgst": ${hdgst:-false}, 00:31:36.542 "ddgst": ${ddgst:-false} 00:31:36.542 }, 00:31:36.542 "method": "bdev_nvme_attach_controller" 00:31:36.542 } 00:31:36.542 EOF 00:31:36.542 )") 00:31:36.542 20:32:13 -- target/dif.sh@54 -- # local file 00:31:36.542 20:32:13 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.542 20:32:13 -- target/dif.sh@56 -- # cat 00:31:36.542 20:32:13 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:36.542 20:32:13 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.542 20:32:13 -- common/autotest_common.sh@1318 -- # shift 00:31:36.542 20:32:13 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:36.542 20:32:13 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.542 20:32:13 -- nvmf/common.sh@542 -- # cat 00:31:36.542 20:32:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:36.542 20:32:13 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.542 20:32:13 -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.542 20:32:13 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:36.542 20:32:13 -- target/dif.sh@73 -- # cat 00:31:36.542 20:32:13 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:36.542 20:32:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:36.542 20:32:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:36.542 { 00:31:36.542 "params": { 00:31:36.542 "name": "Nvme$subsystem", 00:31:36.542 "trtype": "$TEST_TRANSPORT", 00:31:36.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.542 "adrfam": "ipv4", 
00:31:36.542 "trsvcid": "$NVMF_PORT", 00:31:36.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.542 "hdgst": ${hdgst:-false}, 00:31:36.542 "ddgst": ${ddgst:-false} 00:31:36.542 }, 00:31:36.542 "method": "bdev_nvme_attach_controller" 00:31:36.542 } 00:31:36.542 EOF 00:31:36.542 )") 00:31:36.542 20:32:13 -- target/dif.sh@72 -- # (( file++ )) 00:31:36.542 20:32:13 -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.542 20:32:13 -- target/dif.sh@73 -- # cat 00:31:36.542 20:32:13 -- nvmf/common.sh@542 -- # cat 00:31:36.542 20:32:13 -- target/dif.sh@72 -- # (( file++ )) 00:31:36.542 20:32:13 -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.542 20:32:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:36.542 20:32:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:36.542 { 00:31:36.542 "params": { 00:31:36.542 "name": "Nvme$subsystem", 00:31:36.542 "trtype": "$TEST_TRANSPORT", 00:31:36.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.542 "adrfam": "ipv4", 00:31:36.542 "trsvcid": "$NVMF_PORT", 00:31:36.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.542 "hdgst": ${hdgst:-false}, 00:31:36.542 "ddgst": ${ddgst:-false} 00:31:36.542 }, 00:31:36.542 "method": "bdev_nvme_attach_controller" 00:31:36.542 } 00:31:36.542 EOF 00:31:36.542 )") 00:31:36.542 20:32:13 -- nvmf/common.sh@542 -- # cat 00:31:36.542 20:32:13 -- nvmf/common.sh@544 -- # jq . 00:31:36.542 20:32:13 -- nvmf/common.sh@545 -- # IFS=, 00:31:36.542 20:32:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:36.542 "params": { 00:31:36.542 "name": "Nvme0", 00:31:36.542 "trtype": "tcp", 00:31:36.542 "traddr": "10.0.0.2", 00:31:36.542 "adrfam": "ipv4", 00:31:36.542 "trsvcid": "4420", 00:31:36.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:36.542 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:36.542 "hdgst": false, 00:31:36.542 "ddgst": false 00:31:36.542 }, 00:31:36.542 "method": "bdev_nvme_attach_controller" 00:31:36.542 },{ 00:31:36.542 "params": { 00:31:36.542 "name": "Nvme1", 00:31:36.542 "trtype": "tcp", 00:31:36.542 "traddr": "10.0.0.2", 00:31:36.542 "adrfam": "ipv4", 00:31:36.542 "trsvcid": "4420", 00:31:36.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.542 "hdgst": false, 00:31:36.542 "ddgst": false 00:31:36.542 }, 00:31:36.542 "method": "bdev_nvme_attach_controller" 00:31:36.542 },{ 00:31:36.542 "params": { 00:31:36.542 "name": "Nvme2", 00:31:36.542 "trtype": "tcp", 00:31:36.542 "traddr": "10.0.0.2", 00:31:36.542 "adrfam": "ipv4", 00:31:36.542 "trsvcid": "4420", 00:31:36.542 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:36.542 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:36.542 "hdgst": false, 00:31:36.542 "ddgst": false 00:31:36.542 }, 00:31:36.542 "method": "bdev_nvme_attach_controller" 00:31:36.542 }' 00:31:36.542 20:32:13 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:36.542 20:32:13 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:36.542 20:32:13 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.542 20:32:13 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.542 20:32:13 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:36.542 20:32:13 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:36.542 20:32:13 -- common/autotest_common.sh@1322 -- # asan_lib= 
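
The JSON that fio receives on /dev/fd/62 is assembled with the bash idiom visible in the trace above: each subsystem appends one bdev_nvme_attach_controller object to the config array via a here-doc, and the objects are comma-joined at the end by setting IFS before expanding the array with "*". A reduced, runnable sketch of just that idiom (the object body is simplified here; the real here-doc carries the full params shown in the printf output above):

config=()
for subsystem in 0 1 2; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"  # "*" expansion joins the elements with the first character of IFS

This prints the three objects separated by commas, which is exactly the shape of the '{ ... },{ ... },{ ... }' document handed to fio above (the trace additionally pipes it through jq for validation/pretty-printing).
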
00:31:36.542 20:32:13 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:36.542 20:32:13 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:36.542 20:32:13 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.801 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:36.801 ... 00:31:36.801 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:36.801 ... 00:31:36.801 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:36.801 ... 00:31:36.801 fio-3.35 00:31:36.801 Starting 24 threads 00:31:36.801 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.735 [2024-02-14 20:32:14.999316] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:37.735 [2024-02-14 20:32:14.999362] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:50.015 00:31:50.015 filename0: (groupid=0, jobs=1): err= 0: pid=1987672: Wed Feb 14 20:32:25 2024 00:31:50.015 read: IOPS=580, BW=2320KiB/s (2376kB/s)(22.8MiB/10050msec) 00:31:50.015 slat (usec): min=6, max=905, avg=32.33, stdev=28.23 00:31:50.015 clat (usec): min=4628, max=62316, avg=27339.92, stdev=5840.91 00:31:50.015 lat (usec): min=4647, max=62346, avg=27372.25, stdev=5840.78 00:31:50.015 clat percentiles (usec): 00:31:50.015 | 1.00th=[11863], 5.00th=[20055], 10.00th=[22938], 20.00th=[24249], 00:31:50.015 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:31:50.015 | 70.00th=[28181], 80.00th=[31589], 90.00th=[35390], 95.00th=[38536], 00:31:50.015 | 99.00th=[44303], 99.50th=[48497], 99.90th=[62129], 99.95th=[62129], 00:31:50.015 | 99.99th=[62129] 00:31:50.015 bw ( KiB/s): min= 2144, max= 2448, per=4.15%, avg=2328.60, stdev=96.57, samples=20 00:31:50.015 iops : min= 536, max= 612, avg=582.00, stdev=24.11, samples=20 00:31:50.015 lat (msec) : 10=0.69%, 20=4.19%, 50=94.73%, 100=0.39% 00:31:50.015 cpu : usr=86.97%, sys=5.05%, ctx=224, majf=0, minf=9 00:31:50.015 IO depths : 1=0.2%, 2=0.4%, 4=6.8%, 8=78.2%, 16=14.4%, 32=0.0%, >=64=0.0% 00:31:50.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.015 complete : 0=0.0%, 4=89.9%, 8=6.4%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.015 issued rwts: total=5830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.015 filename0: (groupid=0, jobs=1): err= 0: pid=1987673: Wed Feb 14 20:32:25 2024 00:31:50.015 read: IOPS=638, BW=2554KiB/s (2615kB/s)(25.0MiB/10009msec) 00:31:50.015 slat (nsec): min=6161, max=95901, avg=38185.65, stdev=18325.50 00:31:50.015 clat (usec): min=3153, max=41831, avg=24760.10, stdev=3159.31 00:31:50.015 lat (usec): min=3167, max=41843, avg=24798.28, stdev=3161.31 00:31:50.015 clat percentiles (usec): 00:31:50.015 | 1.00th=[12911], 5.00th=[20055], 10.00th=[22938], 20.00th=[23725], 00:31:50.015 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:31:50.015 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27132], 95.00th=[28181], 00:31:50.015 | 99.00th=[33162], 99.50th=[36439], 99.90th=[40633], 99.95th=[41157], 00:31:50.015 | 99.99th=[41681] 00:31:50.015 bw ( KiB/s): min= 2427, max= 2784, per=4.55%, avg=2553.79, 
stdev=96.32, samples=19 00:31:50.015 iops : min= 606, max= 696, avg=638.26, stdev=24.06, samples=19 00:31:50.015 lat (msec) : 4=0.08%, 10=0.70%, 20=4.10%, 50=95.12% 00:31:50.015 cpu : usr=92.35%, sys=3.35%, ctx=181, majf=0, minf=9 00:31:50.015 IO depths : 1=3.1%, 2=8.6%, 4=22.7%, 8=55.9%, 16=9.7%, 32=0.0%, >=64=0.0% 00:31:50.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.015 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.015 issued rwts: total=6391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.015 filename0: (groupid=0, jobs=1): err= 0: pid=1987674: Wed Feb 14 20:32:25 2024 00:31:50.015 read: IOPS=631, BW=2527KiB/s (2588kB/s)(24.7MiB/10012msec) 00:31:50.015 slat (nsec): min=6666, max=99347, avg=40716.19, stdev=17192.55 00:31:50.015 clat (usec): min=9960, max=47109, avg=24958.34, stdev=2507.32 00:31:50.015 lat (usec): min=9974, max=47145, avg=24999.06, stdev=2508.30 00:31:50.015 clat percentiles (usec): 00:31:50.015 | 1.00th=[16057], 5.00th=[21890], 10.00th=[23200], 20.00th=[23725], 00:31:50.015 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:31:50.015 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27919], 00:31:50.015 | 99.00th=[33817], 99.50th=[35914], 99.90th=[39584], 99.95th=[44303], 00:31:50.015 | 99.99th=[46924] 00:31:50.015 bw ( KiB/s): min= 2304, max= 2688, per=4.51%, avg=2529.05, stdev=91.28, samples=20 00:31:50.015 iops : min= 576, max= 672, avg=632.20, stdev=22.86, samples=20 00:31:50.015 lat (msec) : 10=0.03%, 20=3.21%, 50=96.76% 00:31:50.015 cpu : usr=98.59%, sys=0.82%, ctx=95, majf=0, minf=9 00:31:50.015 IO depths : 1=4.7%, 2=10.1%, 4=23.5%, 8=53.9%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:50.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.015 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.015 issued rwts: total=6326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.015 filename0: (groupid=0, jobs=1): err= 0: pid=1987675: Wed Feb 14 20:32:25 2024 00:31:50.015 read: IOPS=582, BW=2328KiB/s (2384kB/s)(22.8MiB/10021msec) 00:31:50.015 slat (usec): min=6, max=100, avg=29.56, stdev=18.25 00:31:50.015 clat (usec): min=9174, max=49482, avg=27320.89, stdev=5231.59 00:31:50.015 lat (usec): min=9196, max=49538, avg=27350.45, stdev=5232.92 00:31:50.015 clat percentiles (usec): 00:31:50.015 | 1.00th=[15270], 5.00th=[19792], 10.00th=[23200], 20.00th=[24249], 00:31:50.015 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26608], 00:31:50.015 | 70.00th=[28181], 80.00th=[31327], 90.00th=[35390], 95.00th=[38011], 00:31:50.015 | 99.00th=[42206], 99.50th=[44827], 99.90th=[49546], 99.95th=[49546], 00:31:50.015 | 99.99th=[49546] 00:31:50.015 bw ( KiB/s): min= 2200, max= 2522, per=4.15%, avg=2328.65, stdev=77.73, samples=20 00:31:50.015 iops : min= 550, max= 630, avg=582.10, stdev=19.37, samples=20 00:31:50.015 lat (msec) : 10=0.02%, 20=5.30%, 50=94.69% 00:31:50.015 cpu : usr=98.97%, sys=0.66%, ctx=9, majf=0, minf=9 00:31:50.015 IO depths : 1=0.2%, 2=0.4%, 4=7.1%, 8=78.0%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:50.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.015 complete : 0=0.0%, 4=90.0%, 8=6.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.015 issued rwts: total=5833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.015 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:31:50.015 filename0: (groupid=0, jobs=1): err= 0: pid=1987676: Wed Feb 14 20:32:25 2024 00:31:50.015 read: IOPS=632, BW=2531KiB/s (2592kB/s)(24.8MiB/10013msec) 00:31:50.015 slat (usec): min=7, max=201, avg=38.59, stdev=18.53 00:31:50.015 clat (usec): min=10230, max=36833, avg=24954.50, stdev=1916.48 00:31:50.015 lat (usec): min=10242, max=36878, avg=24993.09, stdev=1917.56 00:31:50.015 clat percentiles (usec): 00:31:50.015 | 1.00th=[17433], 5.00th=[22676], 10.00th=[23462], 20.00th=[23987], 00:31:50.015 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:31:50.015 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26608], 95.00th=[27395], 00:31:50.015 | 99.00th=[31589], 99.50th=[32637], 99.90th=[35390], 99.95th=[35390], 00:31:50.015 | 99.99th=[36963] 00:31:50.015 bw ( KiB/s): min= 2304, max= 2560, per=4.51%, avg=2527.40, stdev=70.15, samples=20 00:31:50.015 iops : min= 576, max= 640, avg=631.80, stdev=17.52, samples=20 00:31:50.015 lat (msec) : 20=1.77%, 50=98.23% 00:31:50.015 cpu : usr=97.73%, sys=1.17%, ctx=170, majf=0, minf=9 00:31:50.016 IO depths : 1=5.7%, 2=11.7%, 4=24.5%, 8=51.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:50.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 issued rwts: total=6336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.016 filename0: (groupid=0, jobs=1): err= 0: pid=1987677: Wed Feb 14 20:32:25 2024 00:31:50.016 read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10016msec) 00:31:50.016 slat (usec): min=6, max=105, avg=32.88, stdev=18.79 00:31:50.016 clat (usec): min=9376, max=68565, avg=26748.00, stdev=5188.99 00:31:50.016 lat (usec): min=9396, max=68589, avg=26780.88, stdev=5189.15 00:31:50.016 clat percentiles (usec): 00:31:50.016 | 1.00th=[13960], 5.00th=[19792], 10.00th=[22938], 20.00th=[23987], 00:31:50.016 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:31:50.016 | 70.00th=[27132], 80.00th=[29754], 90.00th=[33817], 95.00th=[36963], 00:31:50.016 | 99.00th=[42730], 99.50th=[46400], 99.90th=[58983], 99.95th=[68682], 00:31:50.016 | 99.99th=[68682] 00:31:50.016 bw ( KiB/s): min= 2008, max= 2546, per=4.24%, avg=2376.75, stdev=127.73, samples=20 00:31:50.016 iops : min= 502, max= 636, avg=594.10, stdev=31.92, samples=20 00:31:50.016 lat (msec) : 10=0.08%, 20=5.04%, 50=94.61%, 100=0.27% 00:31:50.016 cpu : usr=98.24%, sys=1.11%, ctx=125, majf=0, minf=9 00:31:50.016 IO depths : 1=0.1%, 2=0.6%, 4=6.9%, 8=77.2%, 16=15.1%, 32=0.0%, >=64=0.0% 00:31:50.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 complete : 0=0.0%, 4=90.3%, 8=6.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 issued rwts: total=5951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.016 filename0: (groupid=0, jobs=1): err= 0: pid=1987678: Wed Feb 14 20:32:25 2024 00:31:50.016 read: IOPS=648, BW=2596KiB/s (2658kB/s)(25.4MiB/10017msec) 00:31:50.016 slat (nsec): min=6776, max=89623, avg=20749.03, stdev=17096.42 00:31:50.016 clat (usec): min=1639, max=44808, avg=24490.69, stdev=3650.70 00:31:50.016 lat (usec): min=1652, max=44824, avg=24511.44, stdev=3651.30 00:31:50.016 clat percentiles (usec): 00:31:50.016 | 1.00th=[ 6063], 5.00th=[18482], 10.00th=[22676], 20.00th=[23725], 00:31:50.016 | 30.00th=[24249], 40.00th=[24511], 
50.00th=[25035], 60.00th=[25297], 00:31:50.016 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26608], 95.00th=[27132], 00:31:50.016 | 99.00th=[32113], 99.50th=[36439], 99.90th=[44303], 99.95th=[44827], 00:31:50.016 | 99.99th=[44827] 00:31:50.016 bw ( KiB/s): min= 2427, max= 2992, per=4.62%, avg=2593.55, stdev=155.93, samples=20 00:31:50.016 iops : min= 606, max= 748, avg=648.25, stdev=39.05, samples=20 00:31:50.016 lat (msec) : 2=0.09%, 4=0.35%, 10=1.22%, 20=4.82%, 50=93.52% 00:31:50.016 cpu : usr=98.60%, sys=0.87%, ctx=14, majf=0, minf=9 00:31:50.016 IO depths : 1=4.8%, 2=10.2%, 4=22.9%, 8=54.1%, 16=8.0%, 32=0.0%, >=64=0.0% 00:31:50.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 issued rwts: total=6500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.016 filename0: (groupid=0, jobs=1): err= 0: pid=1987679: Wed Feb 14 20:32:25 2024 00:31:50.016 read: IOPS=446, BW=1785KiB/s (1827kB/s)(17.4MiB/10010msec) 00:31:50.016 slat (usec): min=3, max=100, avg=30.41, stdev=20.78 00:31:50.016 clat (usec): min=12047, max=54532, avg=35571.40, stdev=6145.76 00:31:50.016 lat (usec): min=12054, max=54546, avg=35601.80, stdev=6143.78 00:31:50.016 clat percentiles (usec): 00:31:50.016 | 1.00th=[22676], 5.00th=[25297], 10.00th=[27657], 20.00th=[30278], 00:31:50.016 | 30.00th=[32113], 40.00th=[33817], 50.00th=[35914], 60.00th=[36963], 00:31:50.016 | 70.00th=[39060], 80.00th=[41681], 90.00th=[43779], 95.00th=[44827], 00:31:50.016 | 99.00th=[46924], 99.50th=[47973], 99.90th=[54264], 99.95th=[54264], 00:31:50.016 | 99.99th=[54789] 00:31:50.016 bw ( KiB/s): min= 1660, max= 2128, per=3.17%, avg=1778.53, stdev=135.99, samples=19 00:31:50.016 iops : min= 415, max= 532, avg=444.63, stdev=34.00, samples=19 00:31:50.016 lat (msec) : 20=0.49%, 50=99.28%, 100=0.22% 00:31:50.016 cpu : usr=99.05%, sys=0.53%, ctx=12, majf=0, minf=9 00:31:50.016 IO depths : 1=5.6%, 2=11.1%, 4=22.9%, 8=53.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:50.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 issued rwts: total=4466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.016 filename1: (groupid=0, jobs=1): err= 0: pid=1987680: Wed Feb 14 20:32:25 2024 00:31:50.016 read: IOPS=602, BW=2408KiB/s (2466kB/s)(23.6MiB/10021msec) 00:31:50.016 slat (usec): min=5, max=100, avg=25.62, stdev=18.85 00:31:50.016 clat (usec): min=8943, max=46953, avg=26432.80, stdev=4588.43 00:31:50.016 lat (usec): min=8979, max=47016, avg=26458.42, stdev=4589.49 00:31:50.016 clat percentiles (usec): 00:31:50.016 | 1.00th=[15795], 5.00th=[20317], 10.00th=[22938], 20.00th=[23987], 00:31:50.016 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:31:50.016 | 70.00th=[26608], 80.00th=[28705], 90.00th=[33424], 95.00th=[36439], 00:31:50.016 | 99.00th=[40633], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:31:50.016 | 99.99th=[46924] 00:31:50.016 bw ( KiB/s): min= 2096, max= 2560, per=4.29%, avg=2407.50, stdev=113.76, samples=20 00:31:50.016 iops : min= 524, max= 640, avg=601.80, stdev=28.42, samples=20 00:31:50.016 lat (msec) : 10=0.02%, 20=4.69%, 50=95.29% 00:31:50.016 cpu : usr=98.73%, sys=0.85%, ctx=19, majf=0, minf=9 00:31:50.016 IO depths : 1=0.3%, 2=0.9%, 4=7.4%, 8=77.3%, 
16=14.1%, 32=0.0%, >=64=0.0% 00:31:50.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 complete : 0=0.0%, 4=90.1%, 8=6.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 issued rwts: total=6033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.016 filename1: (groupid=0, jobs=1): err= 0: pid=1987681: Wed Feb 14 20:32:25 2024 00:31:50.016 read: IOPS=606, BW=2426KiB/s (2484kB/s)(23.7MiB/10015msec) 00:31:50.016 slat (nsec): min=5874, max=87289, avg=32848.13, stdev=20531.72 00:31:50.016 clat (usec): min=8728, max=48236, avg=26209.06, stdev=4176.71 00:31:50.016 lat (usec): min=8744, max=48257, avg=26241.91, stdev=4175.82 00:31:50.016 clat percentiles (usec): 00:31:50.016 | 1.00th=[15926], 5.00th=[20841], 10.00th=[23200], 20.00th=[23987], 00:31:50.016 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:31:50.016 | 70.00th=[26608], 80.00th=[27657], 90.00th=[31589], 95.00th=[34866], 00:31:50.016 | 99.00th=[40109], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:31:50.016 | 99.99th=[48497] 00:31:50.016 bw ( KiB/s): min= 2076, max= 2608, per=4.32%, avg=2424.10, stdev=128.50, samples=20 00:31:50.016 iops : min= 519, max= 652, avg=605.95, stdev=32.15, samples=20 00:31:50.016 lat (msec) : 10=0.20%, 20=3.51%, 50=96.30% 00:31:50.016 cpu : usr=98.68%, sys=0.87%, ctx=15, majf=0, minf=9 00:31:50.016 IO depths : 1=0.3%, 2=0.8%, 4=7.2%, 8=77.5%, 16=14.2%, 32=0.0%, >=64=0.0% 00:31:50.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 complete : 0=0.0%, 4=90.3%, 8=5.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 issued rwts: total=6073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.016 filename1: (groupid=0, jobs=1): err= 0: pid=1987682: Wed Feb 14 20:32:25 2024 00:31:50.016 read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10013msec) 00:31:50.016 slat (nsec): min=6804, max=90952, avg=29937.06, stdev=20638.04 00:31:50.016 clat (usec): min=10094, max=61234, avg=26399.51, stdev=4915.12 00:31:50.016 lat (usec): min=10104, max=61258, avg=26429.45, stdev=4914.28 00:31:50.016 clat percentiles (usec): 00:31:50.016 | 1.00th=[14222], 5.00th=[20317], 10.00th=[23200], 20.00th=[23987], 00:31:50.016 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:31:50.016 | 70.00th=[26608], 80.00th=[28181], 90.00th=[32637], 95.00th=[35914], 00:31:50.016 | 99.00th=[41157], 99.50th=[46400], 99.90th=[61080], 99.95th=[61080], 00:31:50.016 | 99.99th=[61080] 00:31:50.016 bw ( KiB/s): min= 2144, max= 2624, per=4.28%, avg=2402.30, stdev=121.29, samples=20 00:31:50.016 iops : min= 536, max= 656, avg=600.50, stdev=30.30, samples=20 00:31:50.016 lat (msec) : 20=4.88%, 50=94.85%, 100=0.27% 00:31:50.016 cpu : usr=98.99%, sys=0.61%, ctx=14, majf=0, minf=9 00:31:50.016 IO depths : 1=0.8%, 2=3.9%, 4=14.9%, 8=66.9%, 16=13.6%, 32=0.0%, >=64=0.0% 00:31:50.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 complete : 0=0.0%, 4=92.2%, 8=3.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 issued rwts: total=6021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.016 filename1: (groupid=0, jobs=1): err= 0: pid=1987683: Wed Feb 14 20:32:25 2024 00:31:50.016 read: IOPS=553, BW=2215KiB/s (2268kB/s)(21.6MiB/10005msec) 00:31:50.016 slat (usec): min=5, max=100, avg=30.88, stdev=21.21 
00:31:50.016 clat (usec): min=7104, max=59338, avg=28714.35, stdev=5880.42 00:31:50.016 lat (usec): min=7111, max=59358, avg=28745.23, stdev=5878.39 00:31:50.016 clat percentiles (usec): 00:31:50.016 | 1.00th=[15270], 5.00th=[21890], 10.00th=[23200], 20.00th=[24249], 00:31:50.016 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26608], 60.00th=[29492], 00:31:50.016 | 70.00th=[31851], 80.00th=[34866], 90.00th=[36963], 95.00th=[38536], 00:31:50.016 | 99.00th=[42730], 99.50th=[45351], 99.90th=[46924], 99.95th=[58983], 00:31:50.016 | 99.99th=[59507] 00:31:50.016 bw ( KiB/s): min= 1720, max= 2432, per=3.91%, avg=2194.53, stdev=248.27, samples=19 00:31:50.016 iops : min= 430, max= 608, avg=548.47, stdev=62.00, samples=19 00:31:50.016 lat (msec) : 10=0.13%, 20=3.59%, 50=96.19%, 100=0.09% 00:31:50.016 cpu : usr=98.95%, sys=0.63%, ctx=14, majf=0, minf=9 00:31:50.016 IO depths : 1=0.4%, 2=0.9%, 4=10.2%, 8=74.6%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:50.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 complete : 0=0.0%, 4=91.1%, 8=4.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.016 issued rwts: total=5541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.016 filename1: (groupid=0, jobs=1): err= 0: pid=1987684: Wed Feb 14 20:32:25 2024 00:31:50.016 read: IOPS=580, BW=2323KiB/s (2379kB/s)(22.7MiB/10006msec) 00:31:50.016 slat (nsec): min=4817, max=92990, avg=32601.76, stdev=21185.01 00:31:50.016 clat (usec): min=6977, max=50062, avg=27380.20, stdev=5324.55 00:31:50.016 lat (usec): min=6984, max=50090, avg=27412.81, stdev=5324.29 00:31:50.016 clat percentiles (usec): 00:31:50.017 | 1.00th=[14877], 5.00th=[20841], 10.00th=[22938], 20.00th=[24249], 00:31:50.017 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26608], 00:31:50.017 | 70.00th=[28181], 80.00th=[31065], 90.00th=[35390], 95.00th=[37487], 00:31:50.017 | 99.00th=[42730], 99.50th=[46924], 99.90th=[49546], 99.95th=[50070], 00:31:50.017 | 99.99th=[50070] 00:31:50.017 bw ( KiB/s): min= 2020, max= 2443, per=4.12%, avg=2308.37, stdev=108.62, samples=19 00:31:50.017 iops : min= 505, max= 610, avg=576.89, stdev=27.02, samples=19 00:31:50.017 lat (msec) : 10=0.17%, 20=4.03%, 50=95.77%, 100=0.03% 00:31:50.017 cpu : usr=98.82%, sys=0.75%, ctx=18, majf=0, minf=9 00:31:50.017 IO depths : 1=0.1%, 2=0.5%, 4=6.9%, 8=77.9%, 16=14.6%, 32=0.0%, >=64=0.0% 00:31:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 complete : 0=0.0%, 4=90.2%, 8=6.1%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 issued rwts: total=5812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.017 filename1: (groupid=0, jobs=1): err= 0: pid=1987685: Wed Feb 14 20:32:25 2024 00:31:50.017 read: IOPS=576, BW=2306KiB/s (2362kB/s)(22.5MiB/10004msec) 00:31:50.017 slat (nsec): min=6649, max=91505, avg=28851.19, stdev=21203.50 00:31:50.017 clat (usec): min=8449, max=71949, avg=27613.46, stdev=5621.05 00:31:50.017 lat (usec): min=8457, max=71964, avg=27642.31, stdev=5619.24 00:31:50.017 clat percentiles (usec): 00:31:50.017 | 1.00th=[15139], 5.00th=[20841], 10.00th=[22676], 20.00th=[23987], 00:31:50.017 | 30.00th=[24773], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:31:50.017 | 70.00th=[28967], 80.00th=[31851], 90.00th=[35914], 95.00th=[38011], 00:31:50.017 | 99.00th=[43254], 99.50th=[49021], 99.90th=[50070], 99.95th=[71828], 00:31:50.017 | 99.99th=[71828] 00:31:50.017 bw 
( KiB/s): min= 1888, max= 2467, per=4.08%, avg=2288.00, stdev=139.78, samples=19 00:31:50.017 iops : min= 472, max= 616, avg=571.84, stdev=34.90, samples=19 00:31:50.017 lat (msec) : 10=0.24%, 20=3.52%, 50=96.05%, 100=0.19% 00:31:50.017 cpu : usr=98.85%, sys=0.72%, ctx=17, majf=0, minf=9 00:31:50.017 IO depths : 1=0.1%, 2=0.3%, 4=5.7%, 8=78.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:31:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 complete : 0=0.0%, 4=90.4%, 8=6.7%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 issued rwts: total=5768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.017 filename1: (groupid=0, jobs=1): err= 0: pid=1987686: Wed Feb 14 20:32:25 2024 00:31:50.017 read: IOPS=568, BW=2273KiB/s (2327kB/s)(22.2MiB/10020msec) 00:31:50.017 slat (nsec): min=4234, max=91866, avg=30418.56, stdev=20876.78 00:31:50.017 clat (usec): min=9956, max=64430, avg=27984.57, stdev=5518.59 00:31:50.017 lat (usec): min=9968, max=64446, avg=28014.99, stdev=5517.03 00:31:50.017 clat percentiles (usec): 00:31:50.017 | 1.00th=[15139], 5.00th=[21627], 10.00th=[23462], 20.00th=[24249], 00:31:50.017 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[27132], 00:31:50.017 | 70.00th=[30016], 80.00th=[32637], 90.00th=[35914], 95.00th=[38536], 00:31:50.017 | 99.00th=[44303], 99.50th=[47449], 99.90th=[49021], 99.95th=[64226], 00:31:50.017 | 99.99th=[64226] 00:31:50.017 bw ( KiB/s): min= 1968, max= 2400, per=4.05%, avg=2272.90, stdev=105.12, samples=20 00:31:50.017 iops : min= 492, max= 600, avg=568.15, stdev=26.27, samples=20 00:31:50.017 lat (msec) : 10=0.05%, 20=3.76%, 50=96.14%, 100=0.05% 00:31:50.017 cpu : usr=98.88%, sys=0.69%, ctx=17, majf=0, minf=9 00:31:50.017 IO depths : 1=0.2%, 2=0.5%, 4=7.8%, 8=77.4%, 16=14.1%, 32=0.0%, >=64=0.0% 00:31:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 complete : 0=0.0%, 4=90.2%, 8=5.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 issued rwts: total=5693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.017 filename1: (groupid=0, jobs=1): err= 0: pid=1987687: Wed Feb 14 20:32:25 2024 00:31:50.017 read: IOPS=577, BW=2309KiB/s (2365kB/s)(22.6MiB/10006msec) 00:31:50.017 slat (nsec): min=5480, max=95756, avg=30555.39, stdev=21387.45 00:31:50.017 clat (usec): min=5231, max=48771, avg=27563.48, stdev=5271.81 00:31:50.017 lat (usec): min=5238, max=48788, avg=27594.03, stdev=5270.90 00:31:50.017 clat percentiles (usec): 00:31:50.017 | 1.00th=[14746], 5.00th=[21365], 10.00th=[23462], 20.00th=[24249], 00:31:50.017 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:31:50.017 | 70.00th=[28967], 80.00th=[31589], 90.00th=[34866], 95.00th=[36963], 00:31:50.017 | 99.00th=[43779], 99.50th=[45876], 99.90th=[47973], 99.95th=[48497], 00:31:50.017 | 99.99th=[49021] 00:31:50.017 bw ( KiB/s): min= 2080, max= 2522, per=4.09%, avg=2293.42, stdev=119.25, samples=19 00:31:50.017 iops : min= 520, max= 630, avg=573.21, stdev=29.82, samples=19 00:31:50.017 lat (msec) : 10=0.40%, 20=3.41%, 50=96.19% 00:31:50.017 cpu : usr=98.86%, sys=0.71%, ctx=14, majf=0, minf=9 00:31:50.017 IO depths : 1=0.1%, 2=0.4%, 4=6.3%, 8=77.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:31:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 complete : 0=0.0%, 4=90.6%, 8=6.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 issued rwts: 
total=5777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.017 filename2: (groupid=0, jobs=1): err= 0: pid=1987688: Wed Feb 14 20:32:25 2024 00:31:50.017 read: IOPS=578, BW=2315KiB/s (2371kB/s)(22.6MiB/10006msec) 00:31:50.017 slat (nsec): min=4567, max=98542, avg=31197.67, stdev=20756.49 00:31:50.017 clat (usec): min=8592, max=48109, avg=27477.54, stdev=5224.94 00:31:50.017 lat (usec): min=8600, max=48133, avg=27508.74, stdev=5224.40 00:31:50.017 clat percentiles (usec): 00:31:50.017 | 1.00th=[14091], 5.00th=[21890], 10.00th=[23462], 20.00th=[24249], 00:31:50.017 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26608], 00:31:50.017 | 70.00th=[28181], 80.00th=[31851], 90.00th=[35914], 95.00th=[37487], 00:31:50.017 | 99.00th=[41681], 99.50th=[45876], 99.90th=[47449], 99.95th=[47973], 00:31:50.017 | 99.99th=[47973] 00:31:50.017 bw ( KiB/s): min= 1888, max= 2459, per=4.10%, avg=2301.42, stdev=169.85, samples=19 00:31:50.017 iops : min= 472, max= 614, avg=575.16, stdev=42.36, samples=19 00:31:50.017 lat (msec) : 10=0.21%, 20=3.09%, 50=96.70% 00:31:50.017 cpu : usr=98.86%, sys=0.73%, ctx=17, majf=0, minf=9 00:31:50.017 IO depths : 1=0.1%, 2=0.3%, 4=7.4%, 8=77.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:31:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 complete : 0=0.0%, 4=90.4%, 8=6.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.017 filename2: (groupid=0, jobs=1): err= 0: pid=1987689: Wed Feb 14 20:32:25 2024 00:31:50.017 read: IOPS=557, BW=2229KiB/s (2283kB/s)(21.8MiB/10005msec) 00:31:50.017 slat (nsec): min=4409, max=99265, avg=33251.91, stdev=21611.26 00:31:50.017 clat (usec): min=8577, max=59028, avg=28458.74, stdev=5098.11 00:31:50.017 lat (usec): min=8586, max=59062, avg=28491.99, stdev=5095.09 00:31:50.017 clat percentiles (usec): 00:31:50.017 | 1.00th=[15533], 5.00th=[23200], 10.00th=[23987], 20.00th=[24773], 00:31:50.017 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26870], 60.00th=[28705], 00:31:50.017 | 70.00th=[30802], 80.00th=[32900], 90.00th=[35914], 95.00th=[38011], 00:31:50.017 | 99.00th=[40633], 99.50th=[42206], 99.90th=[47449], 99.95th=[47973], 00:31:50.017 | 99.99th=[58983] 00:31:50.017 bw ( KiB/s): min= 1904, max= 2554, per=3.98%, avg=2233.74, stdev=244.49, samples=19 00:31:50.017 iops : min= 476, max= 638, avg=558.26, stdev=61.01, samples=19 00:31:50.017 lat (msec) : 10=0.22%, 20=2.47%, 50=97.27%, 100=0.04% 00:31:50.017 cpu : usr=98.96%, sys=0.61%, ctx=13, majf=0, minf=9 00:31:50.017 IO depths : 1=0.6%, 2=3.5%, 4=20.5%, 8=63.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:31:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 complete : 0=0.0%, 4=94.0%, 8=0.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 issued rwts: total=5576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.017 filename2: (groupid=0, jobs=1): err= 0: pid=1987690: Wed Feb 14 20:32:25 2024 00:31:50.017 read: IOPS=579, BW=2319KiB/s (2374kB/s)(22.7MiB/10020msec) 00:31:50.017 slat (usec): min=4, max=109, avg=28.97, stdev=20.35 00:31:50.017 clat (usec): min=10220, max=53071, avg=27441.34, stdev=5174.54 00:31:50.017 lat (usec): min=10235, max=53129, avg=27470.31, stdev=5174.66 00:31:50.017 clat percentiles (usec): 00:31:50.017 | 1.00th=[15795], 
5.00th=[21890], 10.00th=[23462], 20.00th=[24249], 00:31:50.017 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:31:50.017 | 70.00th=[27919], 80.00th=[31589], 90.00th=[35390], 95.00th=[38011], 00:31:50.017 | 99.00th=[43254], 99.50th=[47449], 99.90th=[50594], 99.95th=[53216], 00:31:50.017 | 99.99th=[53216] 00:31:50.017 bw ( KiB/s): min= 2144, max= 2488, per=4.13%, avg=2318.70, stdev=92.07, samples=20 00:31:50.017 iops : min= 536, max= 622, avg=579.60, stdev=22.95, samples=20 00:31:50.017 lat (msec) : 20=3.13%, 50=96.71%, 100=0.15% 00:31:50.017 cpu : usr=98.95%, sys=0.64%, ctx=16, majf=0, minf=9 00:31:50.017 IO depths : 1=0.1%, 2=0.4%, 4=7.2%, 8=77.9%, 16=14.4%, 32=0.0%, >=64=0.0% 00:31:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 complete : 0=0.0%, 4=90.1%, 8=6.1%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.017 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.017 filename2: (groupid=0, jobs=1): err= 0: pid=1987691: Wed Feb 14 20:32:25 2024 00:31:50.017 read: IOPS=577, BW=2310KiB/s (2366kB/s)(22.6MiB/10020msec) 00:31:50.017 slat (nsec): min=5482, max=99644, avg=33071.13, stdev=19344.71 00:31:50.017 clat (usec): min=7418, max=66412, avg=27538.43, stdev=5409.11 00:31:50.017 lat (usec): min=7433, max=66430, avg=27571.50, stdev=5407.70 00:31:50.017 clat percentiles (usec): 00:31:50.017 | 1.00th=[15926], 5.00th=[21365], 10.00th=[23200], 20.00th=[24249], 00:31:50.017 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26608], 00:31:50.017 | 70.00th=[28443], 80.00th=[31589], 90.00th=[34866], 95.00th=[38011], 00:31:50.017 | 99.00th=[42206], 99.50th=[47973], 99.90th=[53740], 99.95th=[66323], 00:31:50.017 | 99.99th=[66323] 00:31:50.017 bw ( KiB/s): min= 1952, max= 2483, per=4.11%, avg=2307.65, stdev=115.47, samples=20 00:31:50.018 iops : min= 488, max= 620, avg=576.80, stdev=28.78, samples=20 00:31:50.018 lat (msec) : 10=0.07%, 20=3.91%, 50=95.71%, 100=0.31% 00:31:50.018 cpu : usr=98.39%, sys=1.01%, ctx=89, majf=0, minf=9 00:31:50.018 IO depths : 1=0.2%, 2=0.6%, 4=8.0%, 8=76.5%, 16=14.8%, 32=0.0%, >=64=0.0% 00:31:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 complete : 0=0.0%, 4=90.5%, 8=6.1%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 issued rwts: total=5787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.018 filename2: (groupid=0, jobs=1): err= 0: pid=1987692: Wed Feb 14 20:32:25 2024 00:31:50.018 read: IOPS=575, BW=2301KiB/s (2356kB/s)(22.5MiB/10021msec) 00:31:50.018 slat (nsec): min=6783, max=97163, avg=29678.78, stdev=20010.84 00:31:50.018 clat (usec): min=11574, max=55460, avg=27666.20, stdev=5066.25 00:31:50.018 lat (usec): min=11589, max=55508, avg=27695.88, stdev=5066.01 00:31:50.018 clat percentiles (usec): 00:31:50.018 | 1.00th=[16188], 5.00th=[22676], 10.00th=[23725], 20.00th=[24249], 00:31:50.018 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26608], 00:31:50.018 | 70.00th=[28705], 80.00th=[31851], 90.00th=[35390], 95.00th=[37487], 00:31:50.018 | 99.00th=[41681], 99.50th=[45876], 99.90th=[51643], 99.95th=[55313], 00:31:50.018 | 99.99th=[55313] 00:31:50.018 bw ( KiB/s): min= 2128, max= 2408, per=4.10%, avg=2298.70, stdev=92.59, samples=20 00:31:50.018 iops : min= 532, max= 602, avg=574.60, stdev=23.13, samples=20 00:31:50.018 lat (msec) : 20=2.32%, 50=97.47%, 100=0.21% 00:31:50.018 
cpu : usr=98.81%, sys=0.76%, ctx=16, majf=0, minf=9 00:31:50.018 IO depths : 1=0.1%, 2=0.3%, 4=6.2%, 8=78.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:31:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 complete : 0=0.0%, 4=90.0%, 8=6.7%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 issued rwts: total=5764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.018 filename2: (groupid=0, jobs=1): err= 0: pid=1987693: Wed Feb 14 20:32:25 2024 00:31:50.018 read: IOPS=571, BW=2288KiB/s (2343kB/s)(22.4MiB/10005msec) 00:31:50.018 slat (nsec): min=6747, max=89180, avg=23476.38, stdev=20138.57 00:31:50.018 clat (usec): min=9022, max=53034, avg=27862.12, stdev=5629.75 00:31:50.018 lat (usec): min=9034, max=53103, avg=27885.59, stdev=5628.60 00:31:50.018 clat percentiles (usec): 00:31:50.018 | 1.00th=[13829], 5.00th=[20055], 10.00th=[22676], 20.00th=[24249], 00:31:50.018 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[27657], 00:31:50.018 | 70.00th=[29754], 80.00th=[32637], 90.00th=[35390], 95.00th=[38011], 00:31:50.018 | 99.00th=[43779], 99.50th=[45876], 99.90th=[51119], 99.95th=[52691], 00:31:50.018 | 99.99th=[53216] 00:31:50.018 bw ( KiB/s): min= 2048, max= 2428, per=4.06%, avg=2276.21, stdev=106.83, samples=19 00:31:50.018 iops : min= 512, max= 607, avg=568.89, stdev=26.60, samples=19 00:31:50.018 lat (msec) : 10=0.52%, 20=4.25%, 50=95.04%, 100=0.19% 00:31:50.018 cpu : usr=98.89%, sys=0.66%, ctx=13, majf=0, minf=9 00:31:50.018 IO depths : 1=0.1%, 2=0.4%, 4=5.6%, 8=78.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:31:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 complete : 0=0.0%, 4=90.1%, 8=7.0%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 issued rwts: total=5722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.018 filename2: (groupid=0, jobs=1): err= 0: pid=1987694: Wed Feb 14 20:32:25 2024 00:31:50.018 read: IOPS=610, BW=2442KiB/s (2500kB/s)(23.9MiB/10009msec) 00:31:50.018 slat (nsec): min=5890, max=94884, avg=30815.75, stdev=21328.70 00:31:50.018 clat (usec): min=9056, max=57226, avg=25971.28, stdev=6187.75 00:31:50.018 lat (usec): min=9066, max=57242, avg=26002.10, stdev=6190.73 00:31:50.018 clat percentiles (usec): 00:31:50.018 | 1.00th=[11731], 5.00th=[15401], 10.00th=[17957], 20.00th=[23200], 00:31:50.018 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[26084], 00:31:50.018 | 70.00th=[26870], 80.00th=[29754], 90.00th=[33817], 95.00th=[37487], 00:31:50.018 | 99.00th=[44827], 99.50th=[46400], 99.90th=[57410], 99.95th=[57410], 00:31:50.018 | 99.99th=[57410] 00:31:50.018 bw ( KiB/s): min= 2016, max= 2882, per=4.31%, avg=2418.47, stdev=211.05, samples=19 00:31:50.018 iops : min= 504, max= 720, avg=604.53, stdev=52.74, samples=19 00:31:50.018 lat (msec) : 10=0.29%, 20=13.11%, 50=86.33%, 100=0.26% 00:31:50.018 cpu : usr=99.15%, sys=0.44%, ctx=13, majf=0, minf=9 00:31:50.018 IO depths : 1=2.2%, 2=5.7%, 4=17.2%, 8=64.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:31:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 complete : 0=0.0%, 4=92.4%, 8=2.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 issued rwts: total=6110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.018 filename2: (groupid=0, jobs=1): err= 0: pid=1987695: Wed Feb 14 20:32:25 2024 00:31:50.018 
read: IOPS=596, BW=2386KiB/s (2443kB/s)(23.3MiB/10020msec) 00:31:50.018 slat (nsec): min=5300, max=90923, avg=29158.27, stdev=19783.42 00:31:50.018 clat (usec): min=8587, max=66188, avg=26650.07, stdev=4959.35 00:31:50.018 lat (usec): min=8605, max=66204, avg=26679.23, stdev=4959.89 00:31:50.018 clat percentiles (usec): 00:31:50.018 | 1.00th=[14877], 5.00th=[20317], 10.00th=[22938], 20.00th=[23987], 00:31:50.018 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:31:50.018 | 70.00th=[26870], 80.00th=[29492], 90.00th=[33424], 95.00th=[36439], 00:31:50.018 | 99.00th=[41157], 99.50th=[44303], 99.90th=[53740], 99.95th=[53740], 00:31:50.018 | 99.99th=[66323] 00:31:50.018 bw ( KiB/s): min= 2096, max= 2530, per=4.25%, avg=2386.00, stdev=111.29, samples=20 00:31:50.018 iops : min= 524, max= 632, avg=596.40, stdev=27.82, samples=20 00:31:50.018 lat (msec) : 10=0.05%, 20=4.70%, 50=94.98%, 100=0.27% 00:31:50.018 cpu : usr=98.75%, sys=0.82%, ctx=17, majf=0, minf=9 00:31:50.018 IO depths : 1=0.5%, 2=1.1%, 4=7.9%, 8=76.5%, 16=14.0%, 32=0.0%, >=64=0.0% 00:31:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 complete : 0=0.0%, 4=90.3%, 8=5.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.018 issued rwts: total=5977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.018 00:31:50.018 Run status group 0 (all jobs): 00:31:50.018 READ: bw=54.8MiB/s (57.4MB/s), 1785KiB/s-2596KiB/s (1827kB/s-2658kB/s), io=550MiB (577MB), run=10004-10050msec 00:31:50.018 20:32:25 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:50.018 20:32:25 -- target/dif.sh@43 -- # local sub 00:31:50.018 20:32:25 -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.018 20:32:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:50.018 20:32:25 -- target/dif.sh@36 -- # local sub_id=0 00:31:50.018 20:32:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.018 20:32:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.018 20:32:25 -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.018 20:32:25 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:50.018 20:32:25 -- target/dif.sh@36 -- # local sub_id=1 00:31:50.018 20:32:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.018 20:32:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.018 20:32:25 -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.018 20:32:25 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:50.018 20:32:25 -- target/dif.sh@36 -- # local sub_id=2 00:31:50.018 20:32:25 -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.018 20:32:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.018 20:32:25 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:50.018 20:32:25 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:50.018 20:32:25 -- target/dif.sh@115 -- # numjobs=2 00:31:50.018 20:32:25 -- target/dif.sh@115 -- # iodepth=8 00:31:50.018 20:32:25 -- target/dif.sh@115 -- # runtime=5 00:31:50.018 20:32:25 -- target/dif.sh@115 -- # files=1 00:31:50.018 20:32:25 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:50.018 20:32:25 -- target/dif.sh@28 -- # local sub 00:31:50.018 20:32:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.018 20:32:25 -- target/dif.sh@31 -- # create_subsystem 0 00:31:50.018 20:32:25 -- target/dif.sh@18 -- # local sub_id=0 00:31:50.018 20:32:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 bdev_null0 00:31:50.018 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.018 20:32:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.018 20:32:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.018 20:32:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.018 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.018 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.018 [2024-02-14 20:32:25.465872] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.019 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.019 20:32:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.019 20:32:25 -- target/dif.sh@31 -- # create_subsystem 1 00:31:50.019 20:32:25 -- target/dif.sh@18 -- # local sub_id=1 00:31:50.019 20:32:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:50.019 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.019 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.019 bdev_null1 00:31:50.019 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.019 20:32:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:50.019 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.019 
20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.019 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.019 20:32:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:50.019 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.019 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.019 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.019 20:32:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.019 20:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.019 20:32:25 -- common/autotest_common.sh@10 -- # set +x 00:31:50.019 20:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.019 20:32:25 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:50.019 20:32:25 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:50.019 20:32:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:50.019 20:32:25 -- nvmf/common.sh@520 -- # config=() 00:31:50.019 20:32:25 -- nvmf/common.sh@520 -- # local subsystem config 00:31:50.019 20:32:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.019 20:32:25 -- target/dif.sh@82 -- # gen_fio_conf 00:31:50.019 20:32:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:50.019 20:32:25 -- target/dif.sh@54 -- # local file 00:31:50.019 20:32:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:50.019 { 00:31:50.019 "params": { 00:31:50.019 "name": "Nvme$subsystem", 00:31:50.019 "trtype": "$TEST_TRANSPORT", 00:31:50.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.019 "adrfam": "ipv4", 00:31:50.019 "trsvcid": "$NVMF_PORT", 00:31:50.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.019 "hdgst": ${hdgst:-false}, 00:31:50.019 "ddgst": ${ddgst:-false} 00:31:50.019 }, 00:31:50.019 "method": "bdev_nvme_attach_controller" 00:31:50.019 } 00:31:50.019 EOF 00:31:50.019 )") 00:31:50.019 20:32:25 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.019 20:32:25 -- target/dif.sh@56 -- # cat 00:31:50.019 20:32:25 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:50.019 20:32:25 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.019 20:32:25 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:50.019 20:32:25 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.019 20:32:25 -- common/autotest_common.sh@1318 -- # shift 00:31:50.019 20:32:25 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:50.019 20:32:25 -- nvmf/common.sh@542 -- # cat 00:31:50.019 20:32:25 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.019 20:32:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:50.019 20:32:25 -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.019 20:32:25 -- target/dif.sh@73 -- # cat 00:31:50.019 20:32:25 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.019 20:32:25 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:50.019 20:32:25 -- target/dif.sh@72 -- # (( file++ )) 00:31:50.019 20:32:25 -- common/autotest_common.sh@1322 -- # 
awk '{print $3}' 00:31:50.019 20:32:25 -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.019 20:32:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:50.019 20:32:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:50.019 { 00:31:50.019 "params": { 00:31:50.019 "name": "Nvme$subsystem", 00:31:50.019 "trtype": "$TEST_TRANSPORT", 00:31:50.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.019 "adrfam": "ipv4", 00:31:50.019 "trsvcid": "$NVMF_PORT", 00:31:50.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.019 "hdgst": ${hdgst:-false}, 00:31:50.019 "ddgst": ${ddgst:-false} 00:31:50.019 }, 00:31:50.019 "method": "bdev_nvme_attach_controller" 00:31:50.019 } 00:31:50.019 EOF 00:31:50.019 )") 00:31:50.019 20:32:25 -- nvmf/common.sh@542 -- # cat 00:31:50.019 20:32:25 -- nvmf/common.sh@544 -- # jq . 00:31:50.019 20:32:25 -- nvmf/common.sh@545 -- # IFS=, 00:31:50.019 20:32:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:50.019 "params": { 00:31:50.019 "name": "Nvme0", 00:31:50.019 "trtype": "tcp", 00:31:50.019 "traddr": "10.0.0.2", 00:31:50.019 "adrfam": "ipv4", 00:31:50.019 "trsvcid": "4420", 00:31:50.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:50.019 "hdgst": false, 00:31:50.019 "ddgst": false 00:31:50.019 }, 00:31:50.019 "method": "bdev_nvme_attach_controller" 00:31:50.019 },{ 00:31:50.019 "params": { 00:31:50.019 "name": "Nvme1", 00:31:50.019 "trtype": "tcp", 00:31:50.019 "traddr": "10.0.0.2", 00:31:50.019 "adrfam": "ipv4", 00:31:50.019 "trsvcid": "4420", 00:31:50.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:50.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:50.019 "hdgst": false, 00:31:50.019 "ddgst": false 00:31:50.019 }, 00:31:50.019 "method": "bdev_nvme_attach_controller" 00:31:50.019 }' 00:31:50.019 20:32:25 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:50.019 20:32:25 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:50.019 20:32:25 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.019 20:32:25 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:50.019 20:32:25 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.019 20:32:25 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:50.019 20:32:25 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:50.019 20:32:25 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:50.019 20:32:25 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:50.019 20:32:25 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.019 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:50.019 ... 00:31:50.019 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:50.019 ... 00:31:50.019 fio-3.35 00:31:50.019 Starting 4 threads 00:31:50.019 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.019 [2024-02-14 20:32:26.451527] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
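
The rpc.c listen error just above (and its "Unable to start RPC service" companion on the next line) is expected in this setup: fio's embedded SPDK application tries to bring up its own RPC server on /var/tmp/spdk.sock, which the nvmf target process already holds, and the workload proceeds regardless. The generated job file itself is never echoed to the log; reconstructed from the dif.sh parameters set above (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the header lines fio prints, a hand-written equivalent would look roughly like the following. The Nvme0n1/Nvme1n1 filenames are inferred from the bdev names the attach_controller config would produce and are an assumption, as is writing the JSON to a regular file (bdev.json) instead of /dev/fd/62:

# invocation pattern, matching the LD_PRELOAD line in the trace
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio

; dif.fio (reconstructed; not part of the captured log)
[global]
thread=1            ; the spdk_bdev engine requires threaded mode
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k      ; read/write/trim sizes, matching the headers fio prints
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

With numjobs=2 across the two job sections, this accounts for the "Starting 4 threads" line in the run that follows.
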
00:31:50.019 [2024-02-14 20:32:26.451571] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:54.210 00:31:54.210 filename0: (groupid=0, jobs=1): err= 0: pid=1989647: Wed Feb 14 20:32:31 2024 00:31:54.210 read: IOPS=2964, BW=23.2MiB/s (24.3MB/s)(117MiB/5037msec) 00:31:54.210 slat (nsec): min=6043, max=54976, avg=12264.55, stdev=7344.38 00:31:54.210 clat (usec): min=990, max=45606, avg=2664.82, stdev=3746.75 00:31:54.210 lat (usec): min=997, max=45627, avg=2677.08, stdev=3746.75 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 1319], 5.00th=[ 1532], 10.00th=[ 1663], 20.00th=[ 1860], 00:31:54.210 | 30.00th=[ 2008], 40.00th=[ 2147], 50.00th=[ 2245], 60.00th=[ 2376], 00:31:54.210 | 70.00th=[ 2540], 80.00th=[ 2769], 90.00th=[ 3130], 95.00th=[ 3490], 00:31:54.210 | 99.00th=[ 4686], 99.50th=[43779], 99.90th=[44827], 99.95th=[45351], 00:31:54.210 | 99.99th=[45351] 00:31:54.210 bw ( KiB/s): min=15536, max=29552, per=30.33%, avg=23885.10, stdev=3917.72, samples=10 00:31:54.210 iops : min= 1942, max= 3694, avg=2985.60, stdev=489.70, samples=10 00:31:54.210 lat (usec) : 1000=0.03% 00:31:54.210 lat (msec) : 2=29.45%, 4=68.31%, 10=1.40%, 50=0.80% 00:31:54.210 cpu : usr=97.54%, sys=2.10%, ctx=6, majf=0, minf=69 00:31:54.210 IO depths : 1=0.3%, 2=1.7%, 4=66.5%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 issued rwts: total=14933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.210 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:54.210 filename0: (groupid=0, jobs=1): err= 0: pid=1989648: Wed Feb 14 20:32:31 2024 00:31:54.210 read: IOPS=3101, BW=24.2MiB/s (25.4MB/s)(122MiB/5042msec) 00:31:54.210 slat (nsec): min=3169, max=55521, avg=10621.86, stdev=7464.79 00:31:54.210 clat (usec): min=576, max=45921, avg=2542.83, stdev=3243.87 00:31:54.210 lat (usec): min=605, max=45933, avg=2553.45, stdev=3243.81 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 1270], 5.00th=[ 1500], 10.00th=[ 1631], 20.00th=[ 1827], 00:31:54.210 | 30.00th=[ 1975], 40.00th=[ 2114], 50.00th=[ 2212], 60.00th=[ 2343], 00:31:54.210 | 70.00th=[ 2507], 80.00th=[ 2704], 90.00th=[ 3097], 95.00th=[ 3458], 00:31:54.210 | 99.00th=[ 4555], 99.50th=[42730], 99.90th=[44827], 99.95th=[45876], 00:31:54.210 | 99.99th=[45876] 00:31:54.210 bw ( KiB/s): min=18304, max=30224, per=31.76%, avg=25009.60, stdev=4041.04, samples=10 00:31:54.210 iops : min= 2288, max= 3778, avg=3126.20, stdev=505.13, samples=10 00:31:54.210 lat (usec) : 750=0.01% 00:31:54.210 lat (msec) : 2=31.96%, 4=65.91%, 10=1.53%, 50=0.59% 00:31:54.210 cpu : usr=96.85%, sys=2.84%, ctx=6, majf=0, minf=35 00:31:54.210 IO depths : 1=0.3%, 2=1.8%, 4=66.2%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 issued rwts: total=15636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.210 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:54.210 filename1: (groupid=0, jobs=1): err= 0: pid=1989649: Wed Feb 14 20:32:31 2024 00:31:54.210 read: IOPS=3167, BW=24.7MiB/s (25.9MB/s)(124MiB/5002msec) 00:31:54.210 slat (nsec): min=4411, max=51712, avg=10467.43, stdev=7303.13 00:31:54.210 clat (usec): min=603, max=45565, avg=2497.48, stdev=3157.18 00:31:54.210 lat (usec): min=610, 
max=45595, avg=2507.95, stdev=3157.27 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 1156], 5.00th=[ 1336], 10.00th=[ 1549], 20.00th=[ 1795], 00:31:54.210 | 30.00th=[ 1958], 40.00th=[ 2114], 50.00th=[ 2212], 60.00th=[ 2343], 00:31:54.210 | 70.00th=[ 2507], 80.00th=[ 2704], 90.00th=[ 3064], 95.00th=[ 3392], 00:31:54.210 | 99.00th=[ 4359], 99.50th=[42730], 99.90th=[44827], 99.95th=[45351], 00:31:54.210 | 99.99th=[45351] 00:31:54.210 bw ( KiB/s): min=18752, max=30608, per=32.18%, avg=25339.20, stdev=3829.58, samples=10 00:31:54.210 iops : min= 2344, max= 3826, avg=3167.40, stdev=478.70, samples=10 00:31:54.210 lat (usec) : 750=0.01%, 1000=0.06% 00:31:54.210 lat (msec) : 2=32.70%, 4=65.58%, 10=1.10%, 50=0.56% 00:31:54.210 cpu : usr=96.66%, sys=3.00%, ctx=6, majf=0, minf=19 00:31:54.210 IO depths : 1=0.3%, 2=1.7%, 4=65.4%, 8=32.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 issued rwts: total=15844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.210 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:54.210 filename1: (groupid=0, jobs=1): err= 0: pid=1989650: Wed Feb 14 20:32:31 2024 00:31:54.210 read: IOPS=642, BW=5143KiB/s (5266kB/s)(25.1MiB/5003msec) 00:31:54.210 slat (nsec): min=5722, max=52828, avg=10133.50, stdev=6905.60 00:31:54.210 clat (usec): min=1254, max=47231, avg=12404.95, stdev=16470.30 00:31:54.210 lat (usec): min=1261, max=47244, avg=12415.09, stdev=16470.23 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 1500], 5.00th=[ 2409], 10.00th=[ 3097], 20.00th=[ 3523], 00:31:54.210 | 30.00th=[ 3851], 40.00th=[ 4146], 50.00th=[ 4424], 60.00th=[ 4752], 00:31:54.210 | 70.00th=[ 5276], 80.00th=[43254], 90.00th=[44827], 95.00th=[45876], 00:31:54.210 | 99.00th=[46400], 99.50th=[46400], 99.90th=[47449], 99.95th=[47449], 00:31:54.210 | 99.99th=[47449] 00:31:54.210 bw ( KiB/s): min= 3072, max=10416, per=6.52%, avg=5132.80, stdev=2268.16, samples=10 00:31:54.211 iops : min= 384, max= 1302, avg=641.60, stdev=283.52, samples=10 00:31:54.211 lat (msec) : 2=3.17%, 4=32.18%, 10=44.40%, 20=0.09%, 50=20.15% 00:31:54.211 cpu : usr=98.36%, sys=1.32%, ctx=6, majf=0, minf=46 00:31:54.211 IO depths : 1=4.3%, 2=13.7%, 4=60.2%, 8=21.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.211 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.211 issued rwts: total=3216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.211 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:54.211 00:31:54.211 Run status group 0 (all jobs): 00:31:54.211 READ: bw=76.9MiB/s (80.6MB/s), 5143KiB/s-24.7MiB/s (5266kB/s-25.9MB/s), io=388MiB (407MB), run=5002-5042msec 00:31:54.471 20:32:31 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:54.471 20:32:31 -- target/dif.sh@43 -- # local sub 00:31:54.471 20:32:31 -- target/dif.sh@45 -- # for sub in "$@" 00:31:54.471 20:32:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:54.471 20:32:31 -- target/dif.sh@36 -- # local sub_id=0 00:31:54.471 20:32:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:54.471 20:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 20:32:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 20:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.471 20:32:31 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:54.471 20:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 20:32:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 20:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.471 20:32:31 -- target/dif.sh@45 -- # for sub in "$@" 00:31:54.471 20:32:31 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:54.471 20:32:31 -- target/dif.sh@36 -- # local sub_id=1 00:31:54.471 20:32:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.471 20:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 20:32:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 20:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.471 20:32:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:54.471 20:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 20:32:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 20:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.471 00:31:54.471 real 0m24.271s 00:31:54.471 user 4m50.798s 00:31:54.471 sys 0m4.375s 00:31:54.471 20:32:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:54.471 20:32:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 ************************************ 00:31:54.471 END TEST fio_dif_rand_params 00:31:54.471 ************************************ 00:31:54.471 20:32:31 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:54.471 20:32:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:54.471 20:32:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:54.471 20:32:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 ************************************ 00:31:54.471 START TEST fio_dif_digest 00:31:54.471 ************************************ 00:31:54.471 20:32:31 -- common/autotest_common.sh@1102 -- # fio_dif_digest 00:31:54.471 20:32:31 -- target/dif.sh@123 -- # local NULL_DIF 00:31:54.471 20:32:31 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:54.471 20:32:31 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:54.471 20:32:31 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:54.471 20:32:31 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:54.471 20:32:31 -- target/dif.sh@127 -- # numjobs=3 00:31:54.471 20:32:31 -- target/dif.sh@127 -- # iodepth=3 00:31:54.471 20:32:31 -- target/dif.sh@127 -- # runtime=10 00:31:54.471 20:32:31 -- target/dif.sh@128 -- # hdgst=true 00:31:54.471 20:32:31 -- target/dif.sh@128 -- # ddgst=true 00:31:54.471 20:32:31 -- target/dif.sh@130 -- # create_subsystems 0 00:31:54.471 20:32:31 -- target/dif.sh@28 -- # local sub 00:31:54.471 20:32:31 -- target/dif.sh@30 -- # for sub in "$@" 00:31:54.471 20:32:31 -- target/dif.sh@31 -- # create_subsystem 0 00:31:54.471 20:32:31 -- target/dif.sh@18 -- # local sub_id=0 00:31:54.471 20:32:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:54.471 20:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 20:32:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.471 bdev_null0 00:31:54.471 20:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.471 20:32:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:54.471 20:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.471 20:32:31 -- 
common/autotest_common.sh@10 -- # set +x 00:31:54.730 20:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.730 20:32:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:54.730 20:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.730 20:32:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.730 20:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.730 20:32:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:54.730 20:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.730 20:32:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.731 [2024-02-14 20:32:31.901661] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.731 20:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.731 20:32:31 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:54.731 20:32:31 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:54.731 20:32:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:54.731 20:32:31 -- nvmf/common.sh@520 -- # config=() 00:31:54.731 20:32:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.731 20:32:31 -- nvmf/common.sh@520 -- # local subsystem config 00:31:54.731 20:32:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:54.731 20:32:31 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.731 20:32:31 -- target/dif.sh@82 -- # gen_fio_conf 00:31:54.731 20:32:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:54.731 { 00:31:54.731 "params": { 00:31:54.731 "name": "Nvme$subsystem", 00:31:54.731 "trtype": "$TEST_TRANSPORT", 00:31:54.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.731 "adrfam": "ipv4", 00:31:54.731 "trsvcid": "$NVMF_PORT", 00:31:54.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.731 "hdgst": ${hdgst:-false}, 00:31:54.731 "ddgst": ${ddgst:-false} 00:31:54.731 }, 00:31:54.731 "method": "bdev_nvme_attach_controller" 00:31:54.731 } 00:31:54.731 EOF 00:31:54.731 )") 00:31:54.731 20:32:31 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:54.731 20:32:31 -- target/dif.sh@54 -- # local file 00:31:54.731 20:32:31 -- target/dif.sh@56 -- # cat 00:31:54.731 20:32:31 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:54.731 20:32:31 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:54.731 20:32:31 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.731 20:32:31 -- common/autotest_common.sh@1318 -- # shift 00:31:54.731 20:32:31 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:54.731 20:32:31 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.731 20:32:31 -- nvmf/common.sh@542 -- # cat 00:31:54.731 20:32:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:54.731 20:32:31 -- target/dif.sh@72 -- # (( file <= files )) 00:31:54.731 20:32:31 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.731 20:32:31 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:54.731 20:32:31 -- 
common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:54.731 20:32:31 -- nvmf/common.sh@544 -- # jq . 00:31:54.731 20:32:31 -- nvmf/common.sh@545 -- # IFS=, 00:31:54.731 20:32:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:54.731 "params": { 00:31:54.731 "name": "Nvme0", 00:31:54.731 "trtype": "tcp", 00:31:54.731 "traddr": "10.0.0.2", 00:31:54.731 "adrfam": "ipv4", 00:31:54.731 "trsvcid": "4420", 00:31:54.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.731 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:54.731 "hdgst": true, 00:31:54.731 "ddgst": true 00:31:54.731 }, 00:31:54.731 "method": "bdev_nvme_attach_controller" 00:31:54.731 }' 00:31:54.731 20:32:31 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:54.731 20:32:31 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:54.731 20:32:31 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.731 20:32:31 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.731 20:32:31 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:54.731 20:32:31 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:54.731 20:32:31 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:54.731 20:32:31 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:54.731 20:32:31 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:54.731 20:32:31 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.990 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:54.990 ... 00:31:54.990 fio-3.35 00:31:54.990 Starting 3 threads 00:31:54.990 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.249 [2024-02-14 20:32:32.560853] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
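[Editor's note] The fio launch traced at autotest_common.sh@1314-1329 follows the same shape for every dif/digest job: probe the spdk_bdev fio plugin for linked sanitizer runtimes (an ASAN build must have the runtime preloaded ahead of the plugin or fio aborts at startup), then preload the plugin itself and hand fio the JSON bdev config and the generated job file over /dev/fd. A hedged sketch of that helper; the ldd/grep/awk probe and the LD_PRELOAD value are visible in the trace, the wrapper name and body are reconstructed:

    fio_plugin() {
        # $1 is the ioengine plugin; remaining args go to fio verbatim.
        local plugin=$1; shift
        local sanitizers=(libasan libclang_rt.asan)
        local sanitizer asan_lib LD_PRELOAD=
        for sanitizer in "${sanitizers[@]}"; do
            # Empty when the plugin is not a sanitizer build, as in this log
            # (both [[ -n '' ]] checks above fail and asan_lib stays unset).
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
        done
        LD_PRELOAD="$LD_PRELOAD $plugin" /usr/src/fio/fio "$@"
    }

    # As invoked above; process substitution supplies the /dev/fd/62 and
    # /dev/fd/61 paths seen in the log ($rootdir is the spdk checkout).
    fio_plugin "$rootdir/build/fio/spdk_bdev" --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0) <(gen_fio_conf)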
00:31:55.249 [2024-02-14 20:32:32.560900] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:07.455 00:32:07.455 filename0: (groupid=0, jobs=1): err= 0: pid=1990709: Wed Feb 14 20:32:42 2024 00:32:07.455 read: IOPS=297, BW=37.1MiB/s (38.9MB/s)(373MiB/10045msec) 00:32:07.455 slat (usec): min=6, max=104, avg=22.90, stdev= 7.90 00:32:07.455 clat (usec): min=5806, max=55725, avg=10043.86, stdev=3493.98 00:32:07.455 lat (usec): min=5814, max=55748, avg=10066.75, stdev=3494.45 00:32:07.455 clat percentiles (usec): 00:32:07.455 | 1.00th=[ 6652], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8455], 00:32:07.455 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:32:07.455 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11469], 95.00th=[11863], 00:32:07.455 | 99.00th=[13960], 99.50th=[51643], 99.90th=[54789], 99.95th=[55837], 00:32:07.455 | 99.99th=[55837] 00:32:07.455 bw ( KiB/s): min=32768, max=44544, per=36.77%, avg=38182.40, stdev=2948.57, samples=20 00:32:07.455 iops : min= 256, max= 348, avg=298.30, stdev=23.04, samples=20 00:32:07.455 lat (msec) : 10=48.53%, 20=50.94%, 50=0.03%, 100=0.50% 00:32:07.455 cpu : usr=96.73%, sys=2.87%, ctx=36, majf=0, minf=170 00:32:07.455 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.455 issued rwts: total=2984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.455 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:07.455 filename0: (groupid=0, jobs=1): err= 0: pid=1990710: Wed Feb 14 20:32:42 2024 00:32:07.455 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(331MiB/10006msec) 00:32:07.455 slat (nsec): min=6214, max=48391, avg=19187.52, stdev=9078.36 00:32:07.455 clat (usec): min=6335, max=58467, avg=11330.32, stdev=6129.43 00:32:07.455 lat (usec): min=6362, max=58480, avg=11349.51, stdev=6129.37 00:32:07.455 clat percentiles (usec): 00:32:07.455 | 1.00th=[ 7111], 5.00th=[ 7570], 10.00th=[ 7963], 20.00th=[ 9372], 00:32:07.455 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:32:07.455 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12256], 95.00th=[13173], 00:32:07.455 | 99.00th=[54264], 99.50th=[55837], 99.90th=[56886], 99.95th=[57410], 00:32:07.455 | 99.99th=[58459] 00:32:07.455 bw ( KiB/s): min=25394, max=40448, per=32.55%, avg=33807.30, stdev=4018.67, samples=20 00:32:07.455 iops : min= 198, max= 316, avg=264.10, stdev=31.44, samples=20 00:32:07.455 lat (msec) : 10=27.50%, 20=70.57%, 50=0.11%, 100=1.82% 00:32:07.455 cpu : usr=96.08%, sys=3.56%, ctx=20, majf=0, minf=157 00:32:07.455 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.455 issued rwts: total=2644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.455 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:07.455 filename0: (groupid=0, jobs=1): err= 0: pid=1990711: Wed Feb 14 20:32:42 2024 00:32:07.455 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(316MiB/10049msec) 00:32:07.455 slat (nsec): min=6255, max=54877, avg=20209.01, stdev=9080.34 00:32:07.455 clat (usec): min=6134, max=98633, avg=11915.43, stdev=8429.20 00:32:07.455 lat (usec): min=6159, max=98648, avg=11935.63, stdev=8429.55 00:32:07.455 clat percentiles 
(usec): 00:32:07.455 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9503], 00:32:07.455 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10945], 00:32:07.455 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12387], 95.00th=[13698], 00:32:07.455 | 99.00th=[56361], 99.50th=[57934], 99.90th=[96994], 99.95th=[98042], 00:32:07.455 | 99.99th=[99091] 00:32:07.455 bw ( KiB/s): min=18432, max=39936, per=31.08%, avg=32281.60, stdev=5289.59, samples=20 00:32:07.455 iops : min= 144, max= 312, avg=252.20, stdev=41.32, samples=20 00:32:07.455 lat (msec) : 10=29.07%, 20=67.92%, 100=3.01% 00:32:07.455 cpu : usr=96.33%, sys=3.30%, ctx=28, majf=0, minf=163 00:32:07.455 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.455 issued rwts: total=2525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.455 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:07.455 00:32:07.455 Run status group 0 (all jobs): 00:32:07.455 READ: bw=101MiB/s (106MB/s), 31.4MiB/s-37.1MiB/s (32.9MB/s-38.9MB/s), io=1019MiB (1069MB), run=10006-10049msec 00:32:07.455 20:32:42 -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:07.455 20:32:42 -- target/dif.sh@43 -- # local sub 00:32:07.455 20:32:42 -- target/dif.sh@45 -- # for sub in "$@" 00:32:07.455 20:32:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:07.455 20:32:42 -- target/dif.sh@36 -- # local sub_id=0 00:32:07.455 20:32:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:07.455 20:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:07.455 20:32:42 -- common/autotest_common.sh@10 -- # set +x 00:32:07.455 20:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:07.455 20:32:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:07.455 20:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:07.455 20:32:42 -- common/autotest_common.sh@10 -- # set +x 00:32:07.455 20:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:07.455 00:32:07.455 real 0m11.060s 00:32:07.455 user 0m35.530s 00:32:07.455 sys 0m1.240s 00:32:07.455 20:32:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:07.455 20:32:42 -- common/autotest_common.sh@10 -- # set +x 00:32:07.455 ************************************ 00:32:07.455 END TEST fio_dif_digest 00:32:07.455 ************************************ 00:32:07.455 20:32:42 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:07.455 20:32:42 -- target/dif.sh@147 -- # nvmftestfini 00:32:07.455 20:32:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:07.455 20:32:42 -- nvmf/common.sh@116 -- # sync 00:32:07.455 20:32:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:07.455 20:32:42 -- nvmf/common.sh@119 -- # set +e 00:32:07.455 20:32:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:07.455 20:32:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:07.455 rmmod nvme_tcp 00:32:07.455 rmmod nvme_fabrics 00:32:07.455 rmmod nvme_keyring 00:32:07.455 20:32:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:07.455 20:32:43 -- nvmf/common.sh@123 -- # set -e 00:32:07.455 20:32:43 -- nvmf/common.sh@124 -- # return 0 00:32:07.455 20:32:43 -- nvmf/common.sh@477 -- # '[' -n 1982081 ']' 00:32:07.455 20:32:43 -- nvmf/common.sh@478 -- # killprocess 1982081 00:32:07.455 20:32:43 -- common/autotest_common.sh@924 -- # 
'[' -z 1982081 ']' 00:32:07.455 20:32:43 -- common/autotest_common.sh@928 -- # kill -0 1982081 00:32:07.455 20:32:43 -- common/autotest_common.sh@929 -- # uname 00:32:07.455 20:32:43 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:32:07.455 20:32:43 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1982081 00:32:07.455 20:32:43 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:32:07.455 20:32:43 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:32:07.455 20:32:43 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1982081' 00:32:07.455 killing process with pid 1982081 00:32:07.455 20:32:43 -- common/autotest_common.sh@943 -- # kill 1982081 00:32:07.455 20:32:43 -- common/autotest_common.sh@948 -- # wait 1982081 00:32:07.455 20:32:43 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:07.455 20:32:43 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:08.392 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:08.651 Waiting for block devices as requested 00:32:08.910 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:08.910 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:08.910 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:09.169 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:09.169 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:09.169 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:09.169 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:09.428 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:09.428 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:09.428 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:09.428 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:09.687 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:09.687 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:09.687 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:09.947 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:09.947 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:09.947 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:09.947 20:32:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:09.947 20:32:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:09.947 20:32:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:09.947 20:32:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:09.947 20:32:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.947 20:32:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:09.947 20:32:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.485 20:32:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:12.485 00:32:12.485 real 1m14.638s 00:32:12.485 user 7m8.355s 00:32:12.485 sys 0m18.763s 00:32:12.485 20:32:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:12.485 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:32:12.485 ************************************ 00:32:12.485 END TEST nvmf_dif 00:32:12.485 ************************************ 00:32:12.485 20:32:49 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:12.485 20:32:49 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:12.485 20:32:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:12.485 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:32:12.485 ************************************ 
00:32:12.485 START TEST nvmf_abort_qd_sizes 00:32:12.485 ************************************ 00:32:12.485 20:32:49 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:12.485 * Looking for test storage... 00:32:12.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:12.485 20:32:49 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.485 20:32:49 -- nvmf/common.sh@7 -- # uname -s 00:32:12.485 20:32:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.485 20:32:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.485 20:32:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.485 20:32:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.485 20:32:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.485 20:32:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.485 20:32:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.485 20:32:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.485 20:32:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.485 20:32:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.485 20:32:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:12.485 20:32:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:12.485 20:32:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.485 20:32:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.485 20:32:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.485 20:32:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.485 20:32:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.485 20:32:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.485 20:32:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.485 20:32:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.485 20:32:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.485 20:32:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.485 20:32:49 -- paths/export.sh@5 -- # export PATH 00:32:12.485 
20:32:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.485 20:32:49 -- nvmf/common.sh@46 -- # : 0 00:32:12.485 20:32:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:12.485 20:32:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:12.485 20:32:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:12.485 20:32:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.485 20:32:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.486 20:32:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:12.486 20:32:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:12.486 20:32:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:12.486 20:32:49 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:32:12.486 20:32:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:12.486 20:32:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.486 20:32:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:12.486 20:32:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:12.486 20:32:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:12.486 20:32:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.486 20:32:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:12.486 20:32:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.486 20:32:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:12.486 20:32:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:12.486 20:32:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:12.486 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:32:19.062 20:32:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:19.062 20:32:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:19.062 20:32:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:19.062 20:32:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:19.062 20:32:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:19.062 20:32:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:19.062 20:32:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:19.062 20:32:55 -- nvmf/common.sh@294 -- # net_devs=() 00:32:19.062 20:32:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:19.062 20:32:55 -- nvmf/common.sh@295 -- # e810=() 00:32:19.062 20:32:55 -- nvmf/common.sh@295 -- # local -ga e810 00:32:19.062 20:32:55 -- nvmf/common.sh@296 -- # x722=() 00:32:19.062 20:32:55 -- nvmf/common.sh@296 -- # local -ga x722 00:32:19.062 20:32:55 -- nvmf/common.sh@297 -- # mlx=() 00:32:19.062 20:32:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:19.062 20:32:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:32:19.062 20:32:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.062 20:32:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:19.062 20:32:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:19.062 20:32:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:19.062 20:32:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:19.062 20:32:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:19.062 20:32:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:19.062 20:32:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:19.062 20:32:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:19.062 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:19.062 20:32:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:19.062 20:32:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:19.062 20:32:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.062 20:32:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:19.063 20:32:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:19.063 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:19.063 20:32:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:19.063 20:32:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:19.063 20:32:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.063 20:32:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:19.063 20:32:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.063 20:32:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:19.063 Found net devices under 0000:af:00.0: cvl_0_0 00:32:19.063 20:32:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.063 20:32:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:19.063 20:32:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.063 20:32:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:19.063 20:32:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.063 20:32:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:19.063 Found net devices under 0000:af:00.1: cvl_0_1 00:32:19.063 20:32:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.063 20:32:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:19.063 20:32:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:19.063 20:32:55 -- nvmf/common.sh@404 -- # [[ yes 
== yes ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:19.063 20:32:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:19.063 20:32:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.063 20:32:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.063 20:32:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.063 20:32:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:19.063 20:32:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.063 20:32:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.063 20:32:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:19.063 20:32:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.063 20:32:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:19.063 20:32:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:19.063 20:32:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:19.063 20:32:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:19.063 20:32:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:19.063 20:32:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:19.063 20:32:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:19.063 20:32:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:19.063 20:32:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:19.063 20:32:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:19.063 20:32:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:19.063 20:32:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:19.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:19.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:32:19.063 00:32:19.063 --- 10.0.0.2 ping statistics --- 00:32:19.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.063 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:32:19.063 20:32:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:19.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:19.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:32:19.063 00:32:19.063 --- 10.0.0.1 ping statistics --- 00:32:19.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.063 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:32:19.063 20:32:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:19.063 20:32:55 -- nvmf/common.sh@410 -- # return 0 00:32:19.063 20:32:55 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:19.063 20:32:55 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:21.037 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:21.605 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:21.605 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:22.542 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:32:22.542 20:32:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.542 20:32:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:22.542 20:32:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:22.542 20:32:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.542 20:32:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:22.542 20:32:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:22.542 20:32:59 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:32:22.542 20:32:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:22.542 20:32:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:22.542 20:32:59 -- common/autotest_common.sh@10 -- # set +x 00:32:22.542 20:32:59 -- nvmf/common.sh@469 -- # nvmfpid=1999426 00:32:22.542 20:32:59 -- nvmf/common.sh@470 -- # waitforlisten 1999426 00:32:22.542 20:32:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:22.542 20:32:59 -- common/autotest_common.sh@817 -- # '[' -z 1999426 ']' 00:32:22.542 20:32:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.542 20:32:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:22.542 20:32:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.542 20:32:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:22.542 20:32:59 -- common/autotest_common.sh@10 -- # set +x 00:32:22.542 [2024-02-14 20:32:59.885454] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
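[Editor's note] Before those pings, nvmf_tcp_init (nvmf/common.sh@228-267, traced above) isolated one e810 port in a network namespace so initiator and target traffic genuinely crosses the link between cvl_0_1 (10.0.0.1) and cvl_0_0 (10.0.0.2); the nvmf_tgt app starting here then runs entirely inside that namespace. The essential steps, condensed from the trace:

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip netns add $NVMF_TARGET_NAMESPACE
    ip link set cvl_0_0 netns $NVMF_TARGET_NAMESPACE          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec $NVMF_TARGET_NAMESPACE ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set cvl_0_0 up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Both directions are ping-verified (above), then the target starts in
    # the namespace; -m 0xf spawns the four reactors seen on cores 0-3.
    ip netns exec $NVMF_TARGET_NAMESPACE \
        "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!    # waitforlisten then polls /var/tmp/spdk.sock, as logged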
00:32:22.542 [2024-02-14 20:32:59.885493] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.542 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.542 [2024-02-14 20:32:59.948363] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:22.801 [2024-02-14 20:33:00.035095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:22.801 [2024-02-14 20:33:00.035198] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.801 [2024-02-14 20:33:00.035208] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.801 [2024-02-14 20:33:00.035218] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:22.801 [2024-02-14 20:33:00.035263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.801 [2024-02-14 20:33:00.035367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:22.801 [2024-02-14 20:33:00.035454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:22.801 [2024-02-14 20:33:00.035455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.371 20:33:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:23.371 20:33:00 -- common/autotest_common.sh@850 -- # return 0 00:32:23.371 20:33:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:23.371 20:33:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:23.371 20:33:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.371 20:33:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.371 20:33:00 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:23.371 20:33:00 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:23.371 20:33:00 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:23.371 20:33:00 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:23.371 20:33:00 -- scripts/common.sh@312 -- # local nvmes 00:32:23.371 20:33:00 -- scripts/common.sh@314 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 ]] 00:32:23.371 20:33:00 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:23.371 20:33:00 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:23.371 20:33:00 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:32:23.371 20:33:00 -- scripts/common.sh@322 -- # uname -s 00:32:23.371 20:33:00 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:23.371 20:33:00 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:23.371 20:33:00 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:23.371 20:33:00 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:32:23.371 20:33:00 -- scripts/common.sh@323 -- # continue 00:32:23.371 20:33:00 -- scripts/common.sh@327 -- # (( 1 )) 00:32:23.371 20:33:00 -- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0 00:32:23.371 20:33:00 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:23.371 20:33:00 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:5e:00.0 00:32:23.371 20:33:00 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:23.371 20:33:00 -- 
common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:23.371 20:33:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:23.371 20:33:00 -- common/autotest_common.sh@10 -- # set +x 00:32:23.371 ************************************ 00:32:23.371 START TEST spdk_target_abort 00:32:23.371 ************************************ 00:32:23.371 20:33:00 -- common/autotest_common.sh@1102 -- # spdk_target 00:32:23.371 20:33:00 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:23.371 20:33:00 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:23.371 20:33:00 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:32:23.371 20:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.371 20:33:00 -- common/autotest_common.sh@10 -- # set +x 00:32:26.657 spdk_targetn1 00:32:26.657 20:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.657 20:33:03 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:26.657 20:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.657 20:33:03 -- common/autotest_common.sh@10 -- # set +x 00:32:26.657 [2024-02-14 20:33:03.569222] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.657 20:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.657 20:33:03 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:26.657 20:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.657 20:33:03 -- common/autotest_common.sh@10 -- # set +x 00:32:26.657 20:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.657 20:33:03 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:26.657 20:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.657 20:33:03 -- common/autotest_common.sh@10 -- # set +x 00:32:26.657 20:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.657 20:33:03 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:26.657 20:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.658 20:33:03 -- common/autotest_common.sh@10 -- # set +x 00:32:26.658 [2024-02-14 20:33:03.602048] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.658 20:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:26.658 20:33:03 
-- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:26.658 20:33:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:26.658 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.941 Initializing NVMe Controllers 00:32:29.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:29.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:29.941 Initialization complete. Launching workers. 00:32:29.941 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5973, failed: 0 00:32:29.941 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1517, failed to submit 4456 00:32:29.941 success 917, unsuccess 600, failed 0 00:32:29.941 20:33:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:29.941 20:33:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:29.941 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.224 Initializing NVMe Controllers 00:32:33.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:33.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:33.224 Initialization complete. Launching workers. 00:32:33.224 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8751, failed: 0 00:32:33.225 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1235, failed to submit 7516 00:32:33.225 success 355, unsuccess 880, failed 0 00:32:33.225 20:33:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:33.225 20:33:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:33.225 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.512 Initializing NVMe Controllers 00:32:36.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:36.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:36.512 Initialization complete. Launching workers. 
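[Editor's note] The per-run abort accounting is self-consistent and suggests every completed I/O receives exactly one abort attempt: the attempt is either refused at submit time (plausibly because the I/O had already finished) or submitted and then counted success/unsuccess depending on whether it actually cancelled anything. For the qd=4 run: success 917 + unsuccess 600 = 1517 aborts submitted, and 1517 + 4456 failed-to-submit = 5973 I/O completed. The qd=24 run checks out the same way: 355 + 880 = 1235, and 1235 + 7516 = 8751. Larger queue depths leave more I/O in flight at once, which is the point of the test: the target must survive abort storms at every depth without erroring.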
00:32:36.513 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 35037, failed: 0 00:32:36.513 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2864, failed to submit 32173 00:32:36.513 success 753, unsuccess 2111, failed 0 00:32:36.513 20:33:13 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:36.513 20:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.513 20:33:13 -- common/autotest_common.sh@10 -- # set +x 00:32:36.513 20:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.513 20:33:13 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:36.513 20:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.513 20:33:13 -- common/autotest_common.sh@10 -- # set +x 00:32:37.448 20:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:37.448 20:33:14 -- target/abort_qd_sizes.sh@62 -- # killprocess 1999426 00:32:37.448 20:33:14 -- common/autotest_common.sh@924 -- # '[' -z 1999426 ']' 00:32:37.448 20:33:14 -- common/autotest_common.sh@928 -- # kill -0 1999426 00:32:37.448 20:33:14 -- common/autotest_common.sh@929 -- # uname 00:32:37.448 20:33:14 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:32:37.448 20:33:14 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 1999426 00:32:37.448 20:33:14 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:32:37.448 20:33:14 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:32:37.448 20:33:14 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 1999426' 00:32:37.448 killing process with pid 1999426 00:32:37.448 20:33:14 -- common/autotest_common.sh@943 -- # kill 1999426 00:32:37.448 20:33:14 -- common/autotest_common.sh@948 -- # wait 1999426 00:32:37.706 00:32:37.706 real 0m14.147s 00:32:37.706 user 0m56.149s 00:32:37.706 sys 0m2.205s 00:32:37.706 20:33:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:37.706 20:33:14 -- common/autotest_common.sh@10 -- # set +x 00:32:37.706 ************************************ 00:32:37.706 END TEST spdk_target_abort 00:32:37.706 ************************************ 00:32:37.706 20:33:14 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:37.706 20:33:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:37.706 20:33:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:37.706 20:33:14 -- common/autotest_common.sh@10 -- # set +x 00:32:37.706 ************************************ 00:32:37.706 START TEST kernel_target_abort 00:32:37.706 ************************************ 00:32:37.706 20:33:14 -- common/autotest_common.sh@1102 -- # kernel_target 00:32:37.706 20:33:14 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:37.706 20:33:14 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:37.706 20:33:14 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:37.706 20:33:14 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:37.706 20:33:14 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:37.706 20:33:14 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:37.706 20:33:14 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:37.706 20:33:14 -- nvmf/common.sh@627 -- # local block nvme 00:32:37.706 20:33:14 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:37.706 20:33:14 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:37.706 20:33:14 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:37.706 20:33:14 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:40.237 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:40.808 Waiting for block devices as requested 00:32:40.808 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:40.808 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:40.808 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:41.067 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:41.067 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:41.067 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:41.067 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:41.325 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:41.325 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:41.325 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:41.325 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:41.594 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:41.595 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:41.595 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:41.595 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:41.881 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:41.881 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:41.881 20:33:19 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:41.881 20:33:19 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:41.881 20:33:19 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:41.881 20:33:19 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:41.881 20:33:19 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:41.881 No valid GPT data, bailing 00:32:41.881 20:33:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:41.881 20:33:19 -- scripts/common.sh@393 -- # pt= 00:32:41.881 20:33:19 -- scripts/common.sh@394 -- # return 1 00:32:41.881 20:33:19 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:41.881 20:33:19 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:41.881 20:33:19 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:32:41.881 20:33:19 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:32:41.881 20:33:19 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:32:41.881 20:33:19 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:32:42.139 No valid GPT data, bailing 00:32:42.139 20:33:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:32:42.139 20:33:19 -- scripts/common.sh@393 -- # pt= 00:32:42.139 20:33:19 -- scripts/common.sh@394 -- # return 1 00:32:42.139 20:33:19 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:32:42.139 20:33:19 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:42.139 20:33:19 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:32:42.139 20:33:19 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:32:42.139 20:33:19 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:32:42.139 20:33:19 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n2 00:32:42.139 No valid GPT data, bailing 00:32:42.139 20:33:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:32:42.139 20:33:19 -- 
scripts/common.sh@393 -- # pt= 00:32:42.139 20:33:19 -- scripts/common.sh@394 -- # return 1 00:32:42.139 20:33:19 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:32:42.139 20:33:19 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n2 ]] 00:32:42.139 20:33:19 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:42.139 20:33:19 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:42.139 20:33:19 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:42.140 20:33:19 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:42.140 20:33:19 -- nvmf/common.sh@654 -- # echo 1 00:32:42.140 20:33:19 -- nvmf/common.sh@655 -- # echo /dev/nvme1n2 00:32:42.140 20:33:19 -- nvmf/common.sh@656 -- # echo 1 00:32:42.140 20:33:19 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:42.140 20:33:19 -- nvmf/common.sh@663 -- # echo tcp 00:32:42.140 20:33:19 -- nvmf/common.sh@664 -- # echo 4420 00:32:42.140 20:33:19 -- nvmf/common.sh@665 -- # echo ipv4 00:32:42.140 20:33:19 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:42.140 20:33:19 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:42.140 00:32:42.140 Discovery Log Number of Records 2, Generation counter 2 00:32:42.140 =====Discovery Log Entry 0====== 00:32:42.140 trtype: tcp 00:32:42.140 adrfam: ipv4 00:32:42.140 subtype: current discovery subsystem 00:32:42.140 treq: not specified, sq flow control disable supported 00:32:42.140 portid: 1 00:32:42.140 trsvcid: 4420 00:32:42.140 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:42.140 traddr: 10.0.0.1 00:32:42.140 eflags: none 00:32:42.140 sectype: none 00:32:42.140 =====Discovery Log Entry 1====== 00:32:42.140 trtype: tcp 00:32:42.140 adrfam: ipv4 00:32:42.140 subtype: nvme subsystem 00:32:42.140 treq: not specified, sq flow control disable supported 00:32:42.140 portid: 1 00:32:42.140 trsvcid: 4420 00:32:42.140 subnqn: kernel_target 00:32:42.140 traddr: 10.0.0.1 00:32:42.140 eflags: none 00:32:42.140 sectype: none 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 
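A condensed, standalone sketch of the configure_kernel_target sequence just traced, for mapping the bare mkdir/echo lines to their configfs destinations. The nvmet configfs layout and attribute names below are standard kernel API; the device /dev/nvme1n2, the 10.0.0.1:4420 listener and the kernel_target subsystem name come straight from the log, while the attr_serial destination for the 'SPDK-kernel_target' echo is an assumption, since the trace does not show the target file.

#!/usr/bin/env bash
# Sketch only: export a free local NVMe namespace as a kernel NVMe/TCP target.
set -euo pipefail

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/kernel_target
port=$nvmet/ports/1

modprobe nvmet nvmet_tcp                               # target core + TCP transport

mkdir -p "$subsys/namespaces/1" "$port"

echo SPDK-kernel_target > "$subsys/attr_serial"        # assumption: serial-number attribute
echo 1 > "$subsys/attr_allow_any_host"                 # skip host NQN allow-listing
echo /dev/nvme1n2 > "$subsys/namespaces/1/device_path" # device picked by the GPT scan above
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"                    # expose the subsystem on the port

Once the symlink lands the target is live and discoverable, which is exactly what the nvme discover output above confirms (discovery subsystem plus the kernel_target entry on 10.0.0.1:4420).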
00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:42.140 20:33:19 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:42.140 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.425 Initializing NVMe Controllers 00:32:45.425 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:45.425 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:45.425 Initialization complete. Launching workers. 00:32:45.425 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 9737, failed: 9735 00:32:45.425 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19472, failed to submit 0 00:32:45.425 success 0, unsuccess 19472, failed 0 00:32:45.425 20:33:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:45.425 20:33:22 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:45.425 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.707 Initializing NVMe Controllers 00:32:48.707 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:48.707 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:48.707 Initialization complete. Launching workers. 00:32:48.707 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 11257, failed: 11220 00:32:48.707 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 22301, failed to submit 176 00:32:48.707 success 0, unsuccess 22301, failed 0 00:32:48.707 20:33:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:48.707 20:33:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:48.707 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.992 Initializing NVMe Controllers 00:32:51.992 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:51.992 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:51.992 Initialization complete. Launching workers. 
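The records above and below are the rabort sweep: the same abort example binary is launched three times against the kernel target, varying only the queue depth. A minimal sketch of that loop, using only the flags visible in the trace; note that against the kernel target every submitted abort completes as 'unsuccess', i.e. the controller reports the I/O was not aborted, which is a legal NVMe outcome.

# Sketch of the queue-depth sweep (qds 4, 24, 64) driven by abort_qd_sizes.sh.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'

for qd in 4 24 64; do
    # -q queue depth, -w rw with -M 50 (50% reads), -o 4096-byte I/Os,
    # -r transport ID of the target to flood with abort commands
    "$spdk/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done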
00:32:51.992 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 11778, failed: 11742 00:32:51.992 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 23365, failed to submit 155 00:32:51.992 success 0, unsuccess 23365, failed 0 00:32:51.992 20:33:28 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:51.992 20:33:28 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:51.992 20:33:28 -- nvmf/common.sh@677 -- # echo 0 00:32:51.992 20:33:28 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:51.992 20:33:28 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:51.992 20:33:28 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:51.992 20:33:28 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:51.992 20:33:28 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:51.992 20:33:28 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:51.992 00:32:51.992 real 0m13.889s 00:32:51.993 user 0m3.339s 00:32:51.993 sys 0m4.291s 00:32:51.993 20:33:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:51.993 20:33:28 -- common/autotest_common.sh@10 -- # set +x 00:32:51.993 ************************************ 00:32:51.993 END TEST kernel_target_abort 00:32:51.993 ************************************ 00:32:51.993 20:33:28 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:51.993 20:33:28 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:51.993 20:33:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:51.993 20:33:28 -- nvmf/common.sh@116 -- # sync 00:32:51.993 20:33:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:51.993 20:33:28 -- nvmf/common.sh@119 -- # set +e 00:32:51.993 20:33:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:51.993 20:33:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:51.993 rmmod nvme_tcp 00:32:51.993 rmmod nvme_fabrics 00:32:51.993 rmmod nvme_keyring 00:32:51.993 20:33:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:51.993 20:33:28 -- nvmf/common.sh@123 -- # set -e 00:32:51.993 20:33:28 -- nvmf/common.sh@124 -- # return 0 00:32:51.993 20:33:28 -- nvmf/common.sh@477 -- # '[' -n 1999426 ']' 00:32:51.993 20:33:28 -- nvmf/common.sh@478 -- # killprocess 1999426 00:32:51.993 20:33:28 -- common/autotest_common.sh@924 -- # '[' -z 1999426 ']' 00:32:51.993 20:33:28 -- common/autotest_common.sh@928 -- # kill -0 1999426 00:32:51.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (1999426) - No such process 00:32:51.993 20:33:28 -- common/autotest_common.sh@951 -- # echo 'Process with pid 1999426 is not found' 00:32:51.993 Process with pid 1999426 is not found 00:32:51.993 20:33:28 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:51.993 20:33:28 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:54.528 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:54.528 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:54.528 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:00:04.3 (8086 2021): Already using the 
ioatdma driver 00:32:54.528 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:54.528 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:54.787 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:54.787 20:33:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:54.787 20:33:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:54.787 20:33:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:54.787 20:33:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:54.787 20:33:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.787 20:33:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:54.787 20:33:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.692 20:33:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:56.692 00:32:56.692 real 0m44.624s 00:32:56.692 user 1m3.919s 00:32:56.692 sys 0m15.455s 00:32:56.692 20:33:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:56.692 20:33:34 -- common/autotest_common.sh@10 -- # set +x 00:32:56.692 ************************************ 00:32:56.692 END TEST nvmf_abort_qd_sizes 00:32:56.692 ************************************ 00:32:56.692 20:33:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:56.692 20:33:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:56.692 20:33:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:56.692 20:33:34 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:56.692 20:33:34 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:56.692 20:33:34 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:56.692 20:33:34 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:56.692 20:33:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:56.692 20:33:34 -- common/autotest_common.sh@10 -- # set +x 00:32:56.692 20:33:34 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:56.692 20:33:34 -- common/autotest_common.sh@1369 -- # local autotest_es=0 00:32:56.692 20:33:34 -- common/autotest_common.sh@1370 -- # xtrace_disable 00:32:56.692 20:33:34 -- common/autotest_common.sh@10 -- # set +x 00:33:00.878 INFO: APP EXITING 00:33:00.878 INFO: killing all VMs 00:33:00.878 INFO: killing vhost app 00:33:00.878 INFO: EXIT DONE 00:33:03.409 0000:5f:00.0 
(1b96 2600): Skipping denied controller at 0000:5f:00.0 00:33:03.667 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:33:03.667 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:03.667 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:03.667 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:03.667 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:33:03.667 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:03.667 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:33:03.667 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:03.667 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:03.925 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:03.925 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:03.925 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:03.925 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:03.925 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:03.925 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:03.925 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:03.925 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:07.277 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:33:07.277 Cleaning 00:33:07.277 Removing: /var/run/dpdk/spdk0/config 00:33:07.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:07.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:07.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:07.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:07.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:07.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:07.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:07.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:07.277 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:07.277 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:07.277 Removing: /var/run/dpdk/spdk1/config 00:33:07.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:07.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:07.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:07.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:07.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:07.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:07.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:07.277 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:07.277 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:07.277 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:07.277 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:07.277 Removing: /var/run/dpdk/spdk2/config 00:33:07.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:07.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:07.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:07.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:07.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:07.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:07.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:07.277 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:07.277 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:07.277 Removing: 
/var/run/dpdk/spdk2/hugepage_info 00:33:07.277 Removing: /var/run/dpdk/spdk3/config 00:33:07.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:07.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:07.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:07.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:07.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:07.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:07.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:07.277 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:07.277 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:07.277 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:07.277 Removing: /var/run/dpdk/spdk4/config 00:33:07.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:07.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:07.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:07.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:07.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:07.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:07.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:07.277 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:07.277 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:07.277 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:07.277 Removing: /dev/shm/bdev_svc_trace.1 00:33:07.277 Removing: /dev/shm/nvmf_trace.0 00:33:07.537 Removing: /dev/shm/spdk_tgt_trace.pid1591191 00:33:07.537 Removing: /var/run/dpdk/spdk0 00:33:07.537 Removing: /var/run/dpdk/spdk1 00:33:07.537 Removing: /var/run/dpdk/spdk2 00:33:07.537 Removing: /var/run/dpdk/spdk3 00:33:07.537 Removing: /var/run/dpdk/spdk4 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1589009 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1590129 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1591191 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1591851 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1593363 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1594635 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1594913 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1595204 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1595507 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1595827 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1596046 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1596285 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1596557 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1597526 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1600629 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1600899 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1601165 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1601556 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1602057 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1602283 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1602720 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1602789 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1603044 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1603279 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1603534 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1603550 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1604098 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1604343 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1604636 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1604898 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1604922 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1605061 00:33:07.537 Removing: 
/var/run/dpdk/spdk_pid1605282 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1605538 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1605754 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1606013 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1606261 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1606514 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1606739 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1606991 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1607209 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1607455 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1607684 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1607925 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1608148 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1608407 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1608634 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1608882 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1609106 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1609369 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1609594 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1609855 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1610075 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1610325 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1610537 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1610791 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1611016 00:33:07.537 Removing: /var/run/dpdk/spdk_pid1611273 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1611498 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1611738 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1611971 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1612220 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1612447 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1612700 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1612934 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1613188 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1613435 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1613688 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1613923 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1614170 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1614402 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1614672 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1614909 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1615220 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1619269 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1704627 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1709311 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1718523 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1724204 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1729445 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1729946 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1739437 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1739707 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1744449 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1750596 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1753267 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1764366 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1774038 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1775741 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1776761 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1794973 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1799255 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1804034 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1805647 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1807496 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1807737 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1807972 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1808209 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1808734 00:33:07.797 Removing: 
/var/run/dpdk/spdk_pid1810736 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1811706 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1812284 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1818201 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1824080 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1829408 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1867630 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1871462 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1877724 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1879039 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1880560 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1885273 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1889683 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1898022 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1898038 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1903019 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1903253 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1903466 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1903814 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1903948 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1905263 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1906958 00:33:07.797 Removing: /var/run/dpdk/spdk_pid1908564 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1910226 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1912124 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1914119 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1920253 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1920822 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1922343 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1923129 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1928913 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1931694 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1937578 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1943633 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1949662 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1950300 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1950996 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1951692 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1952454 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1953151 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1953847 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1954456 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1959599 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1959830 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1966169 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1966440 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1968663 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1976683 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1976696 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1982348 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1984322 00:33:08.056 Removing: /var/run/dpdk/spdk_pid1986296 00:33:08.057 Removing: /var/run/dpdk/spdk_pid1987350 00:33:08.057 Removing: /var/run/dpdk/spdk_pid1989325 00:33:08.057 Removing: /var/run/dpdk/spdk_pid1990608 00:33:08.057 Removing: /var/run/dpdk/spdk_pid2000257 00:33:08.057 Removing: /var/run/dpdk/spdk_pid2001014 00:33:08.057 Removing: /var/run/dpdk/spdk_pid2001645 00:33:08.057 Removing: /var/run/dpdk/spdk_pid2004202 00:33:08.057 Removing: /var/run/dpdk/spdk_pid2004673 00:33:08.057 Removing: /var/run/dpdk/spdk_pid2005140 00:33:08.057 Clean 00:33:08.057 killing process with pid 1538399 00:33:16.173 killing process with pid 1538396 00:33:16.173 killing process with pid 1538398 00:33:16.173 killing process with pid 1538397 00:33:16.173 20:33:53 -- common/autotest_common.sh@1434 -- # return 0 00:33:16.173 20:33:53 -- spdk/autotest.sh@387 -- # timing_exit 
post_cleanup 00:33:16.173 20:33:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:16.173 20:33:53 -- common/autotest_common.sh@10 -- # set +x 00:33:16.173 20:33:53 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:33:16.173 20:33:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:16.173 20:33:53 -- common/autotest_common.sh@10 -- # set +x 00:33:16.173 20:33:53 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:16.173 20:33:53 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:16.173 20:33:53 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:16.173 20:33:53 -- spdk/autotest.sh@394 -- # hash lcov 00:33:16.173 20:33:53 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:16.173 20:33:53 -- spdk/autotest.sh@396 -- # hostname 00:33:16.173 20:33:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:16.173 geninfo: WARNING: invalid characters removed from testname! 00:33:34.258 20:34:10 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:36.162 20:34:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:37.538 20:34:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:39.441 20:34:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:40.819 20:34:18 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 
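Stripped of its --rc tuning switches, the coverage epilogue above is a three-step lcov pipeline: capture post-test counters, merge them with the pre-test baseline, then filter out third-party and helper code. A minimal sketch with the same file names and filter patterns; the --rc lcov_branch_coverage/lcov_function_coverage options on the original commands additionally enable branch and function data and are omitted here for brevity.

out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Capture counters accumulated during the test run (cov_base.info was captured earlier).
lcov -q -c -d "$spdk" --no-external -t "$(hostname)" -o "$out/cov_test.info"

# Merge baseline and test captures so files never executed still appear with zero hits.
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# Drop vendored and auxiliary sources, one pattern per pass, as in the trace above.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done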
00:33:42.790 20:34:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:44.167 20:34:21 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:44.167 20:34:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.167 20:34:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:44.167 20:34:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.167 20:34:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.167 20:34:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.167 20:34:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.167 20:34:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.167 20:34:21 -- paths/export.sh@5 -- $ export PATH 00:33:44.167 20:34:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.167 20:34:21 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:44.167 20:34:21 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:44.167 20:34:21 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707939261.XXXXXX 00:33:44.167 20:34:21 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707939261.ifGjah 00:33:44.167 20:34:21 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:44.167 20:34:21 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:33:44.167 20:34:21 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:44.167 20:34:21 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:44.167 
20:34:21 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:44.167 20:34:21 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:44.167 20:34:21 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:33:44.167 20:34:21 -- common/autotest_common.sh@10 -- $ set +x 00:33:44.167 20:34:21 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:33:44.167 20:34:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:33:44.167 20:34:21 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:44.167 20:34:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:44.167 20:34:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:44.167 20:34:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:44.167 20:34:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:44.167 20:34:21 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:44.167 20:34:21 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:44.167 20:34:21 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:44.426 20:34:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:44.426 + [[ -n 1496114 ]] 00:33:44.426 + sudo kill 1496114 00:33:44.436 [Pipeline] } 00:33:44.455 [Pipeline] // stage 00:33:44.461 [Pipeline] } 00:33:44.478 [Pipeline] // timeout 00:33:44.483 [Pipeline] } 00:33:44.499 [Pipeline] // catchError 00:33:44.504 [Pipeline] } 00:33:44.521 [Pipeline] // wrap 00:33:44.527 [Pipeline] } 00:33:44.542 [Pipeline] // catchError 00:33:44.551 [Pipeline] stage 00:33:44.553 [Pipeline] { (Epilogue) 00:33:44.567 [Pipeline] catchError 00:33:44.569 [Pipeline] { 00:33:44.583 [Pipeline] echo 00:33:44.585 Cleanup processes 00:33:44.590 [Pipeline] sh 00:33:44.874 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:44.874 2018629 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:44.887 [Pipeline] sh 00:33:45.170 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:45.170 ++ grep -v 'sudo pgrep' 00:33:45.170 ++ awk '{print $1}' 00:33:45.170 + sudo kill -9 00:33:45.170 + true 00:33:45.181 [Pipeline] sh 00:33:45.462 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:57.681 [Pipeline] sh 00:33:57.963 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:57.963 Artifacts sizes are good 00:33:57.977 [Pipeline] archiveArtifacts 00:33:57.984 Archiving artifacts 00:33:58.198 [Pipeline] sh 00:33:58.482 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:58.496 [Pipeline] cleanWs 00:33:58.506 [WS-CLEANUP] Deleting project workspace... 00:33:58.506 [WS-CLEANUP] Deferred wipeout is used... 
00:33:58.512 [WS-CLEANUP] done 00:33:58.514 [Pipeline] } 00:33:58.533 [Pipeline] // catchError 00:33:58.544 [Pipeline] sh 00:33:58.822 + logger -p user.info -t JENKINS-CI 00:33:58.831 [Pipeline] } 00:33:58.846 [Pipeline] // stage 00:33:58.851 [Pipeline] } 00:33:58.867 [Pipeline] // node 00:33:58.872 [Pipeline] End of Pipeline 00:33:58.919 Finished: SUCCESS
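For completeness, the clean_kernel_target teardown traced in the kernel_target_abort epilogue undoes the configfs setup in reverse order; as a standalone sketch, with paths as in the log:

# Disable the namespace, unlink the subsystem from the port, then remove the nodes.
nvmet=/sys/kernel/config/nvmet
echo 0 > "$nvmet/subsystems/kernel_target/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/kernel_target"
rmdir "$nvmet/subsystems/kernel_target/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$nvmet/subsystems/kernel_target"
modprobe -r nvmet_tcp nvmet                     # unload transport before the target core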